The Episcopal Health Foundation embraces an emerging thread of philanthropy that utilizes strategic planning, evaluation, and learning systems. We want to be accountable for the resources we steward, and we believe communities deserve that kind of accountability. But how does strategic philanthropy differ from non-strategic philanthropy? What is the role of collaboration? There are a number of possible responses, but let me illustrate with one example.
Some communities despair at the scale of the social and health problems they face, despite the myriad social service agencies serving them. What often goes unrecognized is that, if we are interested in impact, knowing how agencies work together as a system is as critical as knowing who they are and what they do. The system dynamics map below illustrates the role of strategic philanthropy in converting a particular (but common) vicious cycle into a virtuous cycle for advancing community health.
Box 1 is our starting point: a community with multiple social service providers. Upon receiving general requests for funding, responsive (non-strategic) philanthropies might provide short-term grants to a number of local agencies working on similar issues. Such grant-making can dissipate resources and create territorialism among agencies, leading to lack of cooperation, low capacity, and duplicative investments. The result is failure to meet goals, often with little incentive for improvement. New social service providers emerge to address the local problems that have not been solved. But do we question why the challenges are not being met, and what the implications are for how we do things differently, or do we just chalk it up to "need"? Are the right partners addressing the right issues in the right ways? And do they have the capacity to achieve the goal? Or have we, as funders, actually undermined that potential through our own approaches?
A collaborative model of strategic philanthropy explicitly asks these questions, and recognizes the importance of local knowledge and collaboration in solving complex problems.
Let's take the same starting point below—Box 1's multiple social service providers—and consider how a strategic philanthropy approach might transform a vicious cycle into a virtuous cycle for community health. Instead of the purely responsive grant-making described above, let's say the philanthropy discovers that there are many local social service agencies with a similar focus, being funded by multiple donors, creating little incentive to work together.
The philanthropy approaches other local donors to open a conversation with the community about key challenges, alignment of goals among agencies, strengths and assets of agencies, and refinement of their roles and leveraging opportunities in implementation. This leads to a set of grant applications (ideally reviewed jointly by funders) that take a holistic look at the problem, and advances interdependent relationships, accountability for success, and an explicit learning process for improvement. The result is increased cooperation among local agencies and the community, increased accountability and incentive for focusing on multi-sector solutions to complex problems, and increased capacity and integration among the local players.
Goals are better achieved (or if not, lessons learned for improvement), new structures and relationships might be developed, and agencies may be consolidated with an eye toward increased capacity and integration — now we are on to something. In this scenario, we, as a community, fundamentally changed the dynamic and the systems structure to better respond to local problems because we took an approach that was both strategic and community informed. Of course, some social service providers have been trying to create such virtuous cycles for quite some time, but have not had the resources, leverage, or partners to effectively do so.
Philanthropy can be one of those partners for transforming a vicious into a virtuous cycle, and for strengthening systems, effectiveness, and sustainability for community development.
What do you think of strategic philanthropy approaches? Does this level of philanthropic engagement with communities make sense to you? Do you think that systems approaches can help us make a difference together in advancing community health transformation?
The Episcopal Health Foundation was established in 2013 and works to improve the health and well-being of the 10 million people in the 57 counties of the Episcopal Diocese of Texas. EHF has developed a transformative approach to philanthropy that supports the Foundation's role as a catalyst for change. Specifically, EHF aims to invest in organizations and communities that can help strengthen the health system and transform community health for everyone. The Foundation is committed to being an informed, accountable, collaborative, health-focused philanthropy. Follow EHF on Twitter _USER_
Read This Before Going Into The Kitchen
Many people view cooking as a hassle. This article will energize your cooking and make it a fun activity.
Preparation is one of the most important things that you will have to do before you cook a meal for your friends or family. This will allow you to make sure that all the necessary ingredients are accounted for. Plan your meal a day in advance to make sure you have everything you need. Taking the time to prepare yourself will guarantee that the meal will be a success.
Make sure that the knives you are using are very sharp. Dull knives are difficult to cut with and are also more dangerous to use. You are more likely to cut yourself while trying to force a dull knife through food than when you use a well-sharpened one.
Does it upset you to discard fruit that is moldy? Have you ever thought to just remove the moldy section by cutting it out? Unfortunately, there isn't any way to save fruit that has begun to rot. Throw half-rotten fruit away, because mold can go much deeper than your eyes can see; you can get very sick from eating food with mold.
A good meat seasoning can also be used on other foods. Add it to snack foods like Chex mix, or use it to spice up your omelet. Your guests will be clamoring to know the secret to your delicious food.
When you are cooking, use fresh ingredients instead of frozen or dried ones for the best results. Use whatever seasonal ingredients are available to create tasty dishes, which are also less expensive.
Mushrooms absorb large amounts of water, and this excess water will go into whatever you are cooking. Rather than soaking them, wipe them off with a damp cloth to clean them.
If you have a recipe that requires water or milk, you can change out the liquids to create a whole new flavor. Use chicken broth, beef stock or vegetable broth instead. Sour cream, buttermilk and yogurt make good milk substitutes. Using liquid substitutions in your cooking can increase nutrition and give a standby dish a new flavor.
It is better to sprinkle your seasoning little by little rather than putting it all on at the start. Seasoning in this manner offers several advantages. First, you are maximizing the flavor of whatever you are cooking. Second, you are utilizing ingredients in the best way possible.
Hopefully this article has convinced you that cooking every day does not have to mean a life spent chained to the stove. Taking the advice offered here can change your meal preparation time from humdrum to lively.
This Common Shrub is Highly Fatal to Horses
The laurel plant is a common shrub, mainly found in the Southeastern United States in open wooded areas, mountainous areas, and in both dry and wet land areas. All types of laurel are toxic to horses, with the severity of the reaction depending on the health, age, and size of the horse. Hydrogen cyanide, the active ingredient found in the laurel, can be fatal to even the healthiest of horses, so cases of poisoning should be diagnosed as soon as possible to ensure survival of the horse. Prompt care is essential for a full recovery.
The leaves of the laurel are thick and a bit rubbery in texture, with clusters of white to pink flowers when in bloom. The taste of the plant is bitter, so instances of poisoning are relatively few. Horses typically will not eat from this plant unless there is a lack of other grasses to graze on, or the laurel is growing amongst other grazing plants.
Symptoms and Types
- Quickened heart beat
- Slowed heart beat
- Difficulty breathing
- Weakness
- Difficulty performing normal amounts of work
- Loss of condition
- Cardiac failure
- Respiratory failure
Laurel toxicity occurs following ingestion of leaves, stems, or flowers from the laurel shrub. Hydrogen cyanide, the toxic ingredient, combines with hemoglobin and oxygen to prevent oxygen from being distributed to the cells. Systemic poisoning often results in poisoning of the heart muscle.
It can be difficult to figure out what is wrong with your horse during the earliest stages of laurel poisoning. The diagnosis of this condition often comes post-mortem, which is why it is so important to ensure that horses do not have access to laurel bushes or anything that may have parts of laurel bush in them. A veterinarian should always be consulted when laurel poisoning or any type of poisoning is suspected in your horse.
Since laurel poisoning affects a horse in very different ways than other types of plant poisoning, the methods of treatment are often quite different. Artificial respiration is the first and most common step in treatment of this condition; a high concentration of oxygen is the best method for increasing the chances of survival.
Living and Management
The unfortunate reality of laurel poisoning is that many horses do not survive it. By the time the diagnosis is made, most horses that have ingested laurel are either already recovering from the effects of the toxicity or have died from it. An ounce of prevention is worth a pound of treatment. Before you turn your horse out onto a pasture, you must ensure that all toxic plants, including laurel, have been removed entirely. Frequent property checks for new plant growth, and making sure that your horse is well fed and has plenty of healthy grass to graze on, are also important for ensuring that your horse will not feel the need to graze on toxic plants.
The protein that moves oxygen in the blood
The act of feeding animals with a range or pasture
The eating of grasses and plants that are low to the ground
trometric analysis did not show the presence of the known neuroexcitatory amino acids, kainic acid, domoic acid and NMDA. We conclude that G. marginata possesses substances with neurotoxic and lethal activities.
15. A STUDY OF THE CHAMELEON-I DARK CLOUD AND T-ASSOCIATION .6. INTERSTELLAR POLARIZATION, GRAIN ALIGNMENT AND MAGNETIC-FIELD
NARCIS (Netherlands)
WHITTET, DCB; GERAKINES, PA; CARKNER, AL; HOUGH, JH; MARTIN, PG; PRUSTI, T; KILKENNY, D
1994-01-01
We present new measurements of optical and near-infrared linear polarization towards 39 field stars reddened by dust in the Chamaeleon I dark cloud. New and previously published data are combined in a detailed investigation of the wavelength dependence of interstellar polarization in the cloud.
16. Simulation of polar stratospheric clouds in the chemistry-climate-model EMAC via the submodel PSC
Directory of Open Access Journals (Sweden)
O. Kirner
2011-03-01
The submodel PSC of the ECHAM5/MESSy Atmospheric Chemistry model (EMAC) has been developed to simulate the main types of polar stratospheric clouds (PSCs). The parameterisation of supercooled ternary solutions (STS, type 1b PSC) in the submodel is based on Carslaw et al. (1995b), and the thermodynamic approach used to simulate ice particles (type 2 PSC) on Marti and Mauersberger (1993). For the formation of nitric acid trihydrate (NAT) particles (type 1a PSC), two different parameterisations exist. The first is based on an instantaneous thermodynamic approach from Hanson and Mauersberger (1988); the second is newly implemented and considers the growth of the NAT particles with the aid of a surface growth factor based on Carslaw et al. (2002). It is possible to choose either NAT parameterisation in the submodel. This publication explains the background of the submodel PSC and the use of the submodel with the goal of simulating realistic PSCs in EMAC.
17. Application of circularly polarized laser radiation for sensing of crystal clouds.
Science.gov (United States)
Balin, Yurii; Kaul, Bruno; Kokhanenko, Grigorii; Winker, David
2009-04-13
The application of circularly polarized laser radiation and measurement of the fourth Stokes parameter of scattered radiation considerably reduce the probability of obtaining ambiguous results for radiation depolarization in laser sensing of crystal clouds. The uncertainty arises when cloud particles become partially oriented, with their large diameters aligned along a certain azimuth direction. In approximately 30% of all cases, the measured depolarization depends noticeably on the orientation of the lidar reference plane with respect to the particle orientation direction. In this case, the corridor of the most probable depolarization values is about 0.1-0.15, but in individual cases, it can be noticeably wider. The present article considers theoretical aspects of this phenomenon and the configuration of a lidar capable of measuring the fourth Stokes parameter, together with an algorithm of lidar signal processing in the presence of optically thin cloudiness when molecular scattering cannot be neglected. It is demonstrated that the (4,4) element of the normalized backscattering phase matrix (BSPM) can be measured. Results of measurements are independent of the presence or absence of azimuthal particle orientation. For sensing in the zenith or nadir, this element characterizes the degree of horizontal orientation of long particle diameters under the action of aerodynamic forces arising during free fall of particles.
18. Investigating Gravity Waves in Polar Mesospheric Clouds Using Tomographic Reconstructions of AIM Satellite Imagery
Science.gov (United States)
Hart, V. P.; Taylor, M. J.; Doyle, T. E.; Zhao, Y.; Pautet, P.-D.; Carruth, B. L.; Rusch, D. W.; Russell, J. M.
2018-01-01
This research presents the first application of tomographic techniques for investigating gravity wave structures in polar mesospheric clouds (PMCs) imaged by the Cloud Imaging and Particle Size instrument on the NASA AIM satellite. Albedo data comprising consecutive PMC scenes were used to tomographically reconstruct a 3-D layer using the Partially Constrained Algebraic Reconstruction Technique algorithm and a previously developed "fanning" technique. For this pilot study, a large region (760 × 148 km) of the PMC layer (altitude 83 km) was sampled with a 2 km horizontal resolution, and an intensity weighted centroid technique was developed to create novel 2-D surface maps, characterizing the individual gravity waves as well as their altitude variability. Spectral analysis of seven selected wave events observed during the Northern Hemisphere 2007 PMC season exhibited dominant horizontal wavelengths of 60-90 km, consistent with previous studies. These tomographic analyses have enabled a broad range of new investigations. For example, a clear spatial anticorrelation was observed between the PMC albedo and wave-induced altitude changes, with higher-albedo structures aligning well with wave troughs, while low-intensity regions aligned with wave crests. This result appears to be consistent with current theories of PMC development in the mesopause region. This new tomographic imaging technique also provides valuable wave amplitude information enabling further mesospheric gravity wave investigations, including quantitative analysis of their hemispheric and interannual characteristics and variations.
19. Universal power law of the gravity wave manifestation in the AIM CIPS polar mesospheric cloud images
Directory of Open Access Journals (Sweden)
P. Rong
2018-01-01
We aim to extract a universal law that governs the gravity wave manifestation in polar mesospheric clouds (PMCs). Gravity wave morphology and the clarity level of display vary throughout the wave population manifested by the PMC albedo data. Higher clarity refers to more distinct exhibition of the features, which often correspond to larger variances and a better-organized nature. A gravity wave tracking algorithm based on the continuous Morlet wavelet transform is applied to the PMC albedo data at 83 km altitude taken by the Aeronomy of Ice in the Mesosphere (AIM) Cloud Imaging and Particle Size (CIPS) instrument to obtain a large ensemble of gravity wave detections. The horizontal wavelengths in the range of ∼ 20–60 km are the focus of the study. It shows that the albedo (wave) power statistically increases as the background gets brighter. We resample the wave detections to conform to a normal distribution to examine the wave morphology and display clarity beyond the cloud brightness impact. Sample cases are selected at the two tails and the peak of the normal distribution to represent the full set of wave detections. For these cases the albedo power spectra follow exponential decay toward smaller scales. The high-albedo-power category has the most rapid decay (i.e., exponent = −3.2) and corresponds to the most distinct wave display. The wave display becomes increasingly blurrier for the medium- and low-power categories, which hold the monotonically decreasing spectral exponents of −2.9 and −2.5, respectively. The majority of waves are straight waves whose clarity levels can collapse between the different brightness levels, but in the brighter background the wave signatures seem to exhibit mildly turbulent-like behavior.
20. Universal power law of the gravity wave manifestation in the AIM CIPS polar mesospheric cloud images
Science.gov (United States)
Rong, Pingping; Yue, Jia; Russell, James M., III; Siskind, David E.; Randall, Cora E.
2018-01-01
We aim to extract a universal law that governs the gravity wave manifestation in polar mesospheric clouds (PMCs). Gravity wave morphology and the clarity level of display vary throughout the wave population manifested by the PMC albedo data. Higher clarity refers to more distinct exhibition of the features, which often correspond to larger variances and a better-organized nature. A gravity wave tracking algorithm based on the continuous Morlet wavelet transform is applied to the PMC albedo data at 83 km altitude taken by the Aeronomy of Ice in the Mesosphere (AIM) Cloud Imaging and Particle Size (CIPS) instrument to obtain a large ensemble of the gravity wave detections. The horizontal wavelengths in the range of ˜ 20-60 km are the focus of the study. It shows that the albedo (wave) power statistically increases as the background gets brighter. We resample the wave detections to conform to a normal distribution to examine the wave morphology and display clarity beyond the cloud brightness impact. Sample cases are selected at the two tails and the peak of the normal distribution to represent the full set of wave detections. For these cases the albedo power spectra follow exponential decay toward smaller scales. The high-albedo-power category has the most rapid decay (i.e., exponent = -3.2) and corresponds to the most distinct wave display. The wave display becomes increasingly blurrier for the medium- and low-power categories, which hold the monotonically decreasing spectral exponents of -2.9 and -2.5, respectively. The majority of waves are straight waves whose clarity levels can collapse between the different brightness levels, but in the brighter background the wave signatures seem to exhibit mildly turbulent-like behavior.
1. Comparing lightning polarity and cloud microphysical properties over regions of high ground flash density
Semiconductor device fabrication
Semiconductor device fabrication is the process used to create chips, the integrated circuits that are present in everyday electrical and electronic devices. It is a multiple-step sequence of photographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of pure semiconducting material. Silicon is the most commonly used semiconductor material today, along with various compound semiconductors.
The entire manufacturing process from start to packaged chips ready for shipment takes six to eight weeks and is performed in highly specialized facilities referred to as fabs.
A typical wafer is made out of extremely pure silicon that is grown into mono-crystalline cylindrical ingots (boules) up to 300 mm (slightly less than 12 inches) in diameter using the Czochralski process. These ingots are then sliced into wafers about 0.75 mm thick and polished to obtain a very regular and flat surface.
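As a rough illustration of how these wafer dimensions translate into chip counts, a common back-of-the-envelope formula estimates the number of whole dies that fit on a circular wafer. The function below is a sketch of one of several variants of this formula; real fabs also subtract scribe lines and edge-exclusion zones.

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order gross die-per-wafer estimate: wafer area divided by
    die area, minus a correction for partial dies lost at the round edge."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int((math.pi * d ** 2) / (4 * s) - (math.pi * d) / math.sqrt(2 * s))

# A 300 mm wafer populated with 100 mm^2 dies:
print(gross_dies_per_wafer(300, 100))  # → 640
```

The edge-loss term is one reason larger wafers are economically attractive: the wasted rim is a smaller fraction of the total area.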
Once the wafers are prepared, many process steps are necessary to produce the desired semiconductor integrated circuit. In general, the steps can be grouped into two areas:
- Front end processing
- Back end processing
In semiconductor device fabrication, the various processing steps fall into four general categories: deposition, removal, patterning, and modification of electrical properties.
- Deposition is any process that grows, coats, or otherwise transfers a material onto the wafer. Available technologies consist of physical vapor deposition (PVD), chemical vapor deposition (CVD), electrochemical deposition (ECD), molecular beam epitaxy (MBE) and more recently, atomic layer deposition (ALD) among others.
- Removal processes are any that remove material from the wafer either in bulk or selective form and consist primarily of etch processes, both wet etching and dry etching such as reactive ion etch (RIE). Chemical-mechanical planarization (CMP) is also a removal process used between levels.
- Patterning covers the series of processes that shape or alter the existing shape of the deposited materials and is generally referred to as lithography. For example, in conventional lithography, the wafer is coated with a chemical called a "photoresist". The photoresist is exposed by a "stepper", a machine that focuses, aligns, and moves the mask, exposing select portions of the wafer to short wavelength light. The unexposed regions are washed away by a developer solution. After etching or other processing, the remaining photoresist is removed by plasma ashing.
- Modification of electrical properties has historically consisted of doping transistor sources and drains originally by diffusion furnaces and later by ion implantation. These doping processes are followed by furnace anneal or in advanced devices, by rapid thermal anneal (RTA) which serve to activate the implanted dopants. Modification of electrical properties now also extends to reduction of dielectric constant in low-k insulating materials via exposure to ultraviolet light in UV processing (UVP).
Many modern chips have eight or more levels produced in over 300 sequenced processing steps.
Front End Processing
"Front End Processing" refers to the formation of the transistors directly on the silicon. The raw wafer is engineered by the growth of an ultrapure, virtually defect-free silicon layer through epitaxy. In the most advanced logic devices, prior to the silicon epitaxy step, tricks are performed to improve the performance of the transistors to be built. One method involves introducing a "straining step" wherein a silicon variant such as "silicon-germanium" (SiGe) is deposited. Once the epitaxial silicon is deposited, the crystal lattice becomes stretched somewhat, resulting in improved electronic mobility. Another method, called "silicon on insulator" technology involves the insertion of an insulating layer between the raw silicon wafer and the thin layer of subsequent silicon epitaxy. This method results in the creation of transistors with reduced parasitic effects.
Front end surface engineering is followed by: growth of the gate dielectric, traditionally silicon dioxide (SiO2), patterning of the gate, patterning of the source and drain regions, and subsequent implantation or diffusion of dopants to obtain the desired complementary electrical properties. In memory devices, storage cells, conventionally capacitors, are also fabricated at this time, either into the silicon surface or stacked above the transistor.
Once the various semiconductor devices have been created they must be interconnected to form the desired electrical circuits. This "Back End Of Line" (BEOL – the latter portion of the front end of wafer fabrication, not to be confused with "back end" of chip fabrication which refers to the package and test stages) involves creating metal interconnecting wires that are isolated by insulating dielectrics. The insulating material was traditionally a form of SiO2 or a silicate glass, but recently new low dielectric constant materials are being used. These dielectrics presently take the form of SiOC and have dielectric constants around 2.7 (compared to 3.9 for SiO2), although materials with constants as low as 2.2 are being offered to chipmakers.
Historically, the metal wires consisted of aluminium. In this approach to wiring often called "subtractive aluminium", blanket films of aluminium are deposited first, patterned, and then etched, leaving isolated wires. Dielectric material is then deposited over the exposed wires. The various metal layers are interconnected by etching holes, called "vias," in the insulating material and depositing tungsten in them with a CVD technique. This approach is still used in the fabrication of many memory chips such as dynamic random access memory (DRAM) as the number of interconnect levels is small, currently no more than four.
More recently, as the number of interconnect levels for logic has substantially increased due to the large number of transistors that are now interconnected in a modern microprocessor, the timing delay in the wiring has become significant prompting a change in wiring material from aluminium to copper and from the silicon dioxides to newer low-K material. This performance enhancement also comes at a reduced cost via damascene processing that eliminates processing steps. In damascene processing, in contrast to subtractive aluminium technology, the dielectric material is deposited first as a blanket film and is patterned and etched leaving holes or trenches. In "single damascene" processing, copper is then deposited in the holes or trenches surrounded by a thin barrier film resulting in filled vias or wire "lines" respectively. In "dual damascene" technology, both the trench and via are fabricated before the deposition of copper resulting in formation of both the via and line simultaneously, further reducing the number of processing steps. The thin barrier film, called Copper Barrier Seed (CBS), is necessary to prevent copper diffusion into the dielectric. The ideal barrier film is effective, but is barely there. As the presence of excessive barrier film competes with the available copper wire cross section, formation of the thinnest yet continuous barrier represents one of the greatest ongoing challenges in copper processing today.
As the number of interconnect levels increases, planarization of the previous layers is required to ensure a flat surface prior to subsequent lithography. Without it, the levels would become increasingly crooked and extend outside the depth of focus of available lithography, interfering with the ability to pattern. CMP (Chemical Mechanical Polishing) is the primary processing method to achieve such planarization although dry "etch back" is still sometimes employed if the number of interconnect levels is no more than three.
The highly serialized nature of wafer processing has increased the demand for metrology in between the various processing steps. Wafer test metrology equipment is used to verify that the wafers are still good and haven't been damaged by previous processing steps. If the number of dies—the integrated circuits that will eventually become chips—on a wafer that measure as fails exceeds a predetermined threshold, the wafer is scrapped rather than investing in further processing.
Once the Front End Process has been completed, the semiconductor devices are subjected to a variety of electrical tests to determine if they function properly. The proportion of devices on the wafer found to perform properly is referred to as the yield.
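The way yield falls off with die area is often illustrated with simple defect-statistics models. The sketch below uses the Poisson model purely as an assumed example; production fabs typically fit more elaborate models (e.g., Murphy's model or the negative binomial) to empirical defect densities.

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: probability that a die contains zero killer
    defects, given a uniform random defect density."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# A 1 cm^2 die on a line running at 0.5 killer defects per cm^2:
print(round(poisson_yield(1.0, 0.5), 3))  # → 0.607
```

Note that yield decays exponentially with die area under this model, which is why large dies are disproportionately expensive to manufacture.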
The fab tests the chips on the wafer with an electronic tester that presses tiny probes against the chip. The machine marks each bad chip with a drop of dye. The fab charges for test time; the prices are on the order of cents per second. Chips are often designed with "testability features" to speed testing, and reduce test costs.
Good designs try to test and statistically manage corners: extremes of silicon behavior caused by operating temperature combined with the extremes of fab processing steps. Most designs cope with more than 64 corners.
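The corner count grows multiplicatively with the number of variation axes. The axis names below are illustrative assumptions, not from any real design kit, but they show how a design ends up with 64 or more corners from just three four-valued axes:

```python
from itertools import product

# Hypothetical corner axes for illustration only.
process = ["ss", "sf", "fs", "ff"]            # slow/fast NMOS and PMOS
voltage = ["vmin", "vlow", "vnom", "vmax"]
temperature = ["-40C", "25C", "85C", "125C"]

corners = list(product(process, voltage, temperature))
print(len(corners))  # → 64 (4 * 4 * 4)
```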
Once tested, the wafer is scored and then broken into individual die. Only the good, undyed chips go on to be packaged.
Plastic or ceramic packaging involves mounting the die, connecting the die pads to the pins on the package, and sealing the die. Tiny wires are used to connect pads to the pins. In the old days, wires were attached by hand, but now purpose-built machines perform the task. Traditionally, the wires to the chips were gold, leading to a "lead frame" (pronounced "leed frame") of copper, that had been plated with solder, a mixture of tin and lead. Lead is poisonous, so lead-free "lead frames" are now the best practice.
Chip-scale package (CSP) is another packaging technology. Plastic packaged chips are usually considerably larger than the actual die, whereas CSP chips are nearly the size of the die. CSP can be constructed for each die before the wafer is diced.
The packaged chips are retested to ensure that they were not damaged during packaging and that the die-to-pin interconnect operation was performed correctly.
The Bronx Borough President Vanessa L. Gibson has partnered with the Association for the Study of African American Life and History (ASALH) to organize an event in commemoration of Abolition Day.
To make the event more colorful and memorable, the Office of Bronx BP Gibson honored several African American leaders in appreciation of their contributions to the development of the county.
"Congratulations to all of our honorees! They have worked tirelessly for decades to uplift the next generation of leaders and we proudly stand on their shoulders," BP Gibson said.
ASALH is the oldest and largest historical society established for the promotion of African American history.
Before the signing of the Emancipation Proclamation in 1863 and the enforcement of General Order No. 3 in Texas in 1865 (Juneteenth), New York State had abolished slavery within its borders, becoming one of the first states to do so.
On July 4th, 1827, New York decreed the freedom of African American adult men and women who had been born before July 4th, 1799.
Direct torque control (DTC) is one of the most effective strategies for torque control in an induction machine. It is considered an alternative to the field oriented control (FOC), or vector control, technique. The two control strategies differ in operating principle, but their objectives are the same: to control the torque and flux effectively. Torque control of an induction machine based on the DTC strategy has been developed, and a comprehensive study is presented in this research. The performance of this control method has been demonstrated by simulations performed using a versatile simulation package, Matlab/Simulink. Several numerical simulations have been carried out in steady state and transient operation in a speed control mode.
1. Direct torque control
2. Induction machine
3. Vector control
Fig.1 Direct torque control of induction machine
Fig. 2: Developed model of direct torque control of induction machine
EXPECTED SIMULATION RESULTS:
(a)Estimated torque (b) Rotor speed
(c) Stator flux magnitude (d) Stator flux magnitude constrained in hysteresis band
(e) Locus of the stator flux (f) D-axis and q-axis stator current
(g) Stator current
Fig. 3: a-g: DTC behaviour during start-up with no load
Fig. 4: Steady state response of rotor speed with no load
Fig. 5: Dynamic behaviour of DTC during a load Torque step command from +20 Nm to -20 Nm
Fig. 6: The effect of flux hysteresis band on the DTC performance
The work carried out in this paper is aimed at developing a direct torque control Simulink model. The DTC architecture allows independent, decoupled control of torque and stator flux. The implementation of the DTC model has been described in detail and its realization justified. To show the effectiveness of the model, a numerical simulation was performed on a 7.5 kW induction machine fed by an IGBT PWM inverter. The feasibility and validity of the developed DTC model, based on the switching-table technique, have been proven by the simulation results obtained in torque control mode.
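The switching-table technique mentioned above can be illustrated with a small sketch. This is a hypothetical simplification of the classic DTC lookup (the stator-flux sector plus the torque and flux hysteresis comparator outputs select one of the six active inverter voltage vectors), not the paper's actual Simulink model:

```python
def dtc_select_vector(sector, flux_inc, torque_cmd):
    """Pick an inverter voltage vector for one DTC control step.

    sector     : current stator-flux sector, 1..6
    flux_inc   : True if the flux magnitude fell below its hysteresis band
    torque_cmd : +1 increase torque, 0 hold (zero vector), -1 decrease
    Returns the active vector number 1..6, or 0 for a zero vector.
    """
    if torque_cmd == 0:
        return 0                              # V0/V7: hold torque
    k = sector - 1                            # 0-based sector index
    if flux_inc:
        step = 1 if torque_cmd > 0 else -1    # adjacent vectors raise flux
    else:
        step = 2 if torque_cmd > 0 else -2    # farther vectors lower flux
    return (k + step) % 6 + 1                 # back to 1-based numbering
```

For example, in sector 1 with both flux and torque below their hysteresis bands, this sketch selects vector 2, in line with the classic Takahashi switching table.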
1. Casadei, D., G. Gandi, G. Serra and A. Tani, 1994. Effect of flux and torque hysteresis band amplitude in direct torque control of Induction Machine. Proc. IECON'94, Bologna, Italy, 299-304.
2. Casadei, D., F. Profumo, G. Serra and A. Tani, 2002. FOC and DTC: two viable schemes for induction motors torque control. IEEE Trans. Power Electronics, 17(5): 779-787.
3. Chapuis, Y.A. and D. Roye, 1998. Direct torque control and current limitation method in start up of an induction machine. IEEE Conf. Power Electronics and Variable Speed Drives, 451-455.
4. Takahashi, I. and Y. Ohmori, 1989. High performance direct torque control of induction motor. IEEE Trans. Ind. Appl., 25(2): 257-264.
5. Vas, P., 1990. Vector Control of A.C. Machines. Oxford University Press.
of fathers or in second and succeeding generations of offspring has generally been rejected on theoretical grounds and for lack of evidence.[85] However, a range of male-mediated abnormalities have been demonstrated, and more are likely to exist.[86] FDA label information for Vidaza, a formulation of 5-azacitidine (an unmethylatable analog of cytidine that causes hypomethylation when incorporated into DNA) states that "men should be advised not to father a child" while using the drug, citing evidence in treated male mice of reduced fertility, increased embryo loss, and abnormal embryo development.[87] In rats, endocrine differences were observed in offspring of males exposed to morphine.[88] In mice, second generation effects of diethylstilbesterol have been described occurring by epigenetic mechanisms.[89]
Cancer subtypes[edit]
Prostate cancer[edit]
Prostate cancer kills around 35,000 men yearly, and about 220,000 men are diagnosed with prostate cancer per year, in North America alone.[90] Prostate cancer is the second leading cause of cancer-caused fatalities in men, and within a man's lifetime, one in six men will have the disease.[90] Alterations in histone acetylation and DNA methylation occur in various genes influencing prostate cancer.[91] More than 90% of prostate cancers show gene silencing by CpG island hypermethylation of the GSTP1 gene promoter, which protects prostate cells from genomic damage that is caused by different oxidants or carcinogens.[92] Real-time methylation-specific polymerase chain reaction (PCR) suggests that many other genes are also hypermethylated.[92] Gene expression in the prostate may be modulated by nutrition and lifestyle changes.[93]
Cervical cancer[edit]
The second most common malignant tumor in women is invasive cervical cancer (ICC), and more than 50% of all ICC is caused by oncogenic human papillomavirus 16 (HPV16).[94] Furthermore, cervix intraepithelial neoplasia (CIN) is primarily caused by oncogenic HPV16.[94] As in many cases, the causative factor for cancer does not always take a direct route from infection to the development of cancer. Genomic methylation patterns have been associated with invasive cervical cancer. Within the HPV16 L1 region, 14 tested CpG sites have significantly higher methylation in CIN3+ than in HPV16 genomes of women without CIN3.[94] Only 2/16 CpG sites tested in the HPV16 upstream regulatory region were found to be associated with increased methylation in CIN3+.[94] This suggests that the direct route from infection to cancer is sometimes detoured to a precancerous state in cervix intraepithelial neoplasia. Additionally, increased CpG site methylation was found at low levels in most of the five host nuclear genes studied, including 5/5 TERT, 1/4 DAPK1, 2/5 RARB, MAL, and CADM1.[94] Furthermore, 1/3 of CpG sites in mitochondrial DNA were associated with increased methylation in CIN3+.[94] Thus, a correlation exists between CIN3+ and increased methylation of CpG sites in the HPV16 L1 open reading frame.[94] This could be a potential biomarker for future screens of cancerous and precancerous cervical disease.[94]
Leukemia[edit]
Recent studies have shown that the mixed-lineage leukemia (MLL) gene causes leukemia by rearranging and fusing with other genes in different chromosomes, which is a process under epigenetic control.[95]
Sarcoma[edit]
There are about 15,000 new cases of sarcoma in the US each year, and about 6,200 people were projected to die of sarcoma in the US in 2014.[96] Sarcomas comprise a large number of rare, histogenetically heterogeneous mesenchymal tumors that for example include chondrosarcoma, Ewing's sarcoma, leiomyosarcoma, liposarcoma, osteosarcoma, synovial sarcoma, and (alveolar and embryonal) rhabdomyosarcoma. Several oncogenes and tumor suppressor genes are epigenetically altered in sarcomas. These include APC, CDKN1A, CDKN2A, CDKN2B, Ezrin, FGFR1, GADD45A, MGMT, STK3, STK4, PTEN, RASSF1A, WIF1, as well as several miRNAs.[97] Expression of epigenetic modifiers such as that of the BMI1 component of the PRC1 complex is deregulated in chondrosarcoma, Ewing's sarcoma, and osteosarcoma, and expression of the EZH2 component of the PRC2 complex is altered in Ewing's sarcoma and rhabdomyosarcoma. Similarly, expression of another epigenetic modifier, the LSD1 histone demethylase, is increased in chondrosarcoma, Ewing's sarcoma, osteosarcoma, and rhabdomyosarcoma. Drug targeting and inhibition of EZH2 in Ewing's sarcoma,[98] or of LSD1 in several sarcomas,[99] inhibits tumor cell growth in these sarcomas.
Identification methods[edit]
Previously, epigenetic profiles were limited to individual genes under scrutiny by a particular research team. Recently, however, scientists have been moving toward a more genomic approach to determine an entire genomic profile for cancerous versus healthy cells.[3]
Popular approaches for measuring CpG methylation in cells include:
Since bisulfite sequencing is considered the gold standard for measuring CpG methylation, when one of the other methods is used, results are usually confirmed using bisulfite sequencing[1]. Popular approaches for determining histone modification profiles in cancerous versus healthy cells include:[3]
Diagnosis and prognosis[edit]
Researchers are hoping to identify specific epigenetic profiles of various types and subtypes of cancer with the goal of using these profiles as tools to diagnose individuals more specifically and accurately.[3] Since epigenetic profiles change, scientists would like to use the different epigenomic profiles to determine the stage of development or level of aggressiveness of a particular cancer in patients. For example, hypermethylation of the genes coding for Death-Associated Protein Kinase (DAPK), p16, and Epithelial Membrane Protein 3 (EMP3) have been linked to more aggressive forms of lung, colorectal, and brain cancers.[10] This type of knowledge can affect the way that doctors will diagnose and choose to treat their patients.
Another factor that will influence the treatment of patients is knowing how well they will respond to certain treatments. Personalized epigenomic profiles of cancerous cells can provide insight into this field. For example, MGMT is an enzyme that reverses the addition of alkyl groups to the nucleotide guanine.[100] Alkylating guanine, however, is the mechanism by which several chemotherapeutic drugs act in order to disrupt DNA and cause cell death.[101][102][103][104] Therefore, if the gene encoding MGMT in cancer cells is hypermethylated and in effect silenced or repressed, the chemotherapeutic drugs that act by methylating guanine will be more effective than in cancer cells that have a functional MGMT enzyme.
Epigenetic biomarkers can also be utilized as tools for molecular prognosis. In primary tumor and mediastinal lymph node biopsy samples, hypermethylation of both CDKN2A and CDH13 serves as the marker for increased risk of faster cancer relapse and higher death rate of patients.[105]
Treatment[edit]
Epigenetic control of the proto-onco regions and the tumor suppressor sequences by conformational changes in histones plays a role in the formation and progression of cancer.[106] Pharmaceuticals that reverse epigenetic changes might have a role in a variety of cancers.[91][106][107]
It has recently become evident that associations between specific cancer histotypes and epigenetic changes can facilitate the development of novel epi-drugs.[108] Drug development has focused mainly on modifying DNA methyltransferase, histone acetyltransferase (HAT) and histone deacetylase (HDAC).[109]
Drugs that specifically target the inverted methylation pattern of cancerous cells include the DNA methyltransferase inhibitors azacitidine[110][111] and decitabine.[112][113] These hypomethylating agents are used to treat myelodysplastic syndrome,[114] a blood cancer produced by abnormal bone marrow stem cells.[5] These agents inhibit all three types of active DNA methyltransferases, and had been thought to be highly toxic, but proved to be effective when used in low dosage, reducing progression of myelodysplastic syndrome to leukemia.[115]
Histone deace | 1.770808 | EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample |
" and "CCC" as derived class of "AAA".
if (m_ParsingTypedef)
// m_ParsingTypedef = false;
m_Str = newToken->m_Name;
You may ask: where are these Tokens stored? The answer is that all the Tokens are recorded in the TokenTree class.
For example, if you add three items to the TokenTree:
- "" (0)
\- "p" (4)
+- "hysi" (2)
| +- "cs" (1)
| \- "ology" (3)
\- "sychic" (5)
Animation of Patricia tree
A better animation demo of a Patricia tree can be found in the Search Tree Viewer. On that page, you can select the tree type and enter some strings; the tree will then be drawn dynamically in the window.
SearchTree Node
Node Label
Cb tree node lable.png
How a new item is added to the tree
When you add a new key (string) to the tree, some nodes may need to be split and new nodes added. The following example shows the details.
Suppose we have two nodes in the tree, with node ids 0 (the root node) and 1; the tree holds only one key string (item), "physics":
Adding word: physics
1 items in the tree
- "" (0)
\- "physics" (1)
Now, if another key string "physiology" is added to the tree, we first check whether "physiology" is already in the tree. If not, we need to find the common prefix "physi", and node 1 is split after the common prefix, giving the following result.
Adding word: physiology
2 items in the tree
- "" (0)
\- "physi" (2)
+- "cs" (1)
\- "ology" (3)
Here, the middle node "physi" is added. The original node 1's label is shortened to "cs", and a new node "ology" is added as a child of the middle node.
In the other case, a newly added item does not cause a new node to be added to the tree; an existing node's edge only needs to be extended. For example:
6 items in the tree
- "" (0)
\- "p" (4)
+- "aranormal" (8)
+- "hysi" (2)
| +- "cs" (1)
| \- "ology" (3)
\- "sych" (6)
+- "i" (9)
| +- "atrist" (10)
| \- "cs" (5)
\- "otic" (7)
Now, add a new item "psychiatrists" and pay attention to node 10 in the tree above. We already have the item "psychiatrist", so when "psychiatrists" is added, we can simply extend node 10's edge from "atrist" to "atrists"; no new node is needed.
7 items in the tree
- "" (0)
\- "p" (4)
+- "aranormal" (8)
+- "hysi" (2)
| +- "cs" (1)
| \- "ology" (3)
\- "sych" (6)
+- "i" (9)
| +- "atrists" (10)
| \- "cs" (5)
\- "otic" (7)
Note that node 10 now contains two items. It is a map (depth -> item).
depth -> item
12 -> "psychiatrist"
13 -> "psychiatrists"
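The node-splitting behaviour described above can be sketched with a minimal radix-tree insert. This is an illustrative Python sketch, not the actual TokenTree implementation (which additionally extends leaf edges in place and stores a depth-to-item map in each node):

```python
class Node:
    def __init__(self, label=""):
        self.label = label          # edge label leading to this node
        self.children = {}          # first character -> child Node
        self.is_item = False        # True if a key ends exactly here

def insert(root, key):
    node = root
    while key:
        child = node.children.get(key[0])
        if child is None:           # no matching edge: add a leaf
            leaf = Node(key)
            leaf.is_item = True
            node.children[key[0]] = leaf
            return
        # length of the common prefix of the edge label and the key
        label = child.label
        i = 0
        while i < len(label) and i < len(key) and label[i] == key[i]:
            i += 1
        if i == len(label):         # whole edge matched: descend
            node, key = child, key[i:]
            continue
        # partial match: split the edge after the common prefix
        mid = Node(label[:i])
        node.children[key[0]] = mid
        child.label = label[i:]
        mid.children[child.label[0]] = child
        if i == len(key):
            mid.is_item = True      # key ends exactly at the split point
        else:
            rest = Node(key[i:])
            rest.is_item = True
            mid.children[key[i]] = rest
        return
    node.is_item = True             # key fully consumed at an existing node
```

Inserting "physics" and then "physiology" produces the middle node "physi" with children "cs" and "ology", exactly as in the example trees above.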
How to query a Token by a keyword
The parser collects all the Token information and stores it in the TokenTree; the GUI functions can query keywords from this database to show function tips or to build the Class Browser tree.
void ab()
Flexible Parser structure
The Parser class (in Parser.cpp and Parser.h) has been refactored so that every project is associated with its own Parser object (svn revision > 6268). This means one Parser object per project, so every project (xxx.cbp) holds its own Token macro definitions and Token trees.
Due to the newly introduced conditional preprocessor handling mechanism, the same source file may yield different Tokens under the different macro definitions of different projects. CC also supports parsing files that do not belong to any project.
1. You open a project in C::B; all the files belonging to the project are then parsed by CC.
2. Now, you open another file (we will call it a "separate file" below) which belongs neither to the current project nor to any other opened project.
3. CC then creates a temporary parser named the "NONE" parser and adds the separate file to it.
4. So, you can still see the class tree when viewing the separate file.
5. Once you close the separate file, the "NONE" parser is automatically removed.
The main idea is that CC can now parse separate files and support class browsing for them in the Symbol browser.
Automatic Code Completion
Find the search scope
CC SearchScope.png
Break up the current statement
Do an AI match
CC AI Match.png
Here is the matching algorithm:
Code completion debugging support
Debug Log output
wxString name;
wxString args;
wxString m_Str;
codeblocks.exe --debug-log
Debug Log output panel
TRACE macro support
As you can see in source files like "tokenizer.cpp" or "parserthread.cpp", there are macro definitions at the beginning of the source.
#if CC_CODECOMPLETION_DEBUG_OUTPUT == 1
    #define TRACE(format, args...) \
        Manager::Get()->GetLogManager()->DebugLog(F(format, ##args))
    #define TRACE2(format, args...)
#elif CC_CODECOMPLETION_DEBUG_OUTPUT == 2
    #define TRACE(format, args...)                                            \
        do                                                                    \
        {                                                                     \
            if (g_EnableDebugTrace)                                           \
                Manager::Get()->GetLogManager()->DebugLog(F(format, ##args)); \
        }                                                                     \
        while (false)
    #define TRACE2(format, args...) \
        Manager::Get()->GetLogManager()->DebugLog(F(format, ##args))
#else
    #define TRACE(format, args...)
    #define TRACE2(format, args...)
#endif
This is a fairly complex set of macro definitions. If CC_CODECOMPLETION_DEBUG_OUTPUT is defined as 0, the debugging macros are disabled. If it is defined as 1, only the TRACE macro is defined, so you can print messages to the Debug log. The parser usually parses many files, so the debug messages can flood the log. If we are only interested in the debug messages produced while parsing one specific source file, the best way is to define CC_CODECOMPLETION_DEBUG_OUTPUT as 2; the TRACE macro will then print messages for a single source file only. The source file name can be specified in token.cpp:
const wxString g_DebugTraceFile = _T("myfile.cpp");
This way, only the debug messages produced while parsing "myfile.cpp" will be traced.
Code-Completion debug tool dialog
When you hold the Shift and Ctrl keys and double-click any entry of the navigator tree, a debug tool dialog pops up giving more detail about the selected token. You can query information such as its member variables, its ancestors, and so on.
Debug Smart Sense log output
When you hold the Shift and Ctrl keys and right-click any entry of the navigator tree, the context menu shows a "Debug SmartSense" entry. Click it to enable the feature; see the image below. Then, all the debug log information generated during an auto-completion is shown in the "Debug Log" panel.
Cc debug smartsense.png
Mutexes and lockers in CC
We need lockers to avoid multithreading issues with wxString. More details can be found in: wxString and the locker issue in our CC code. To avoid such issues, we use three main kinds of lockers:

wxMutex s_ParserMutex;
wxMutex s_TokenTreeMutex;
wxMutex m_ClassBrowserBuilderThreadMutex;

The first (a static variable) protects against concurrent access to the Parser objects. The second prevents concurrent access to the TokenTree. The last is used when building the symbol browser tree (usually shown in the left docked panel of the C::B main frame).
The associated macros are below:
// For tracking, either uncomment:
//...or:
//..or none of the above.
// [1] Implementations for tracking mutexes:
#define THREAD_LOCKER_MTX_LOCK(NAME)                                         \
    CCLogger::Get()->DebugLog(F(_T("%s.Lock() : %s(), %s, %d"),              \
                                wxString(#NAME, wxConvUTF8).wx_str(),        \
                                wxString(__FUNCTION__, wxConvUTF8).wx_str(), \
                                wxString(__FILE__, wxConvUTF8).wx_str(),     \
                                __LINE__))
#define THREAD_LOCKER_MTX_LOCK_SUCCESS(NAME)                                 \
    CCLogger::Get()->DebugLog(F(_T("%s.Lock().Success() : %s(), %s, %d"),    \
                                wxString(#NAME, wxConvUTF8).wx_str(),        \
                                wxString(__FUNCTION__, wxConvUTF8).wx_str(), \
                                wxString(__FILE__, wxConvUTF8).wx_str(),     \
                                __LINE__))
If you're changing out your pressure tank for a larger one, or needing to replace it, following the proper steps will allow you to get this done successfully.
There are different brands and sizes of pressure tanks. Consult a well driller to make sure the new pressure tank matches the GPM and HP of the well pump you have, as you do not want your pressure tank to be oversized. Oversizing the system will wear out your pump, since the pump would run constantly.
Things You'll Need
- Plumber's wrench
- Teflon pipe thread tape
Turn off the electricity to the well at the breaker.
Turn on your water faucets and drain them. Wait for the water to stop before beginning installation.
Unscrew the pipe underneath the pressure tank with your plumber's wrench. Do this slowly, allowing it to release the excess pressure as you go.
Remove the tank after fully unscrewing the pipe.
Install the new tank in the old tank's place. Wrap pipe thread tape on the threads of the pipe, going fully around three or four times. Screw the pipe into the pressure tank.
Turn back on the electricity and test for leaks.
Tips & Warnings
- Turn off the electricity to the well before starting.
Deleting all Data from a SQL Server Database (Part 2)
In part 2 of a series, On VB columnist Joe Kunk provides a Visual Basic program to generate a SQL script that clears all the data from a SQL Server database.
In part one of this article, I described a scenario in which I needed to empty all the data from a copy of a large database on a remote server, in order to download a minimal backup and populate it with test data for a demonstration on my laptop in a location without a reliable Internet connection. I could have scripted the existing database to create a new one, but that required trust that the new database is an identical copy of the existing database. Creating a copy of the database and then deleting all the data gives me a much higher confidence that I have an empty database structurally identical to the original.
As promised in part one, I developed a Visual Basic program to generate a SQL script that clears all the data from a SQL Server database (see Code Download for full source code and executable). The program only generates a T-SQL script; it does not make any changes to the database structure or data directly when run. Print statements are included throughout the script to display a log of actions taken by the script when run. The script has been tested on SQL Server 2005 and SQL Server 2008 R2. The program is compiled against the x86 CPU and the .NET Framework 3.5 Client Profile, so it will run on a standard Windows 7 computer without the need to install the .NET Framework 4. No other assemblies are needed.
The basic approach is to TRUNCATE each table, dropping and re-creating foreign key constraints and associated extended properties if needed. If not truncated, the table is cleared with a DELETE FROM statement and the identity seed field is explicitly reset to the original seed value. The order in which the tables are processed is determined based on the foreign key dependencies. The application generates the SQL script quickly since it is simply reading schema information to generate text for a script.
The choice of how to clear the table is made in part on the security permissions of the user's database connection. This utility is meant to be used by a developer or database administrator with access to SQL Server Management Studio, not an application user. The script assumes that the user has at least the permission to read and write all tables. See Listing 1 for the generated script to clear the Microsoft sample AdventureWorks database with the ALTER permission on the tables. See Listing 2 for the generated script to clear the same database but without the ALTER permission, as would be the case with only the DataReader and DataWriter database roles.
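The decision logic just described can be summarized in a short sketch. The function name and boolean inputs here are hypothetical illustrations, not the article's actual code (which lives in the DAL's ClearTableAction handling):

```python
def clear_action(has_alter_permission, is_schema_bound):
    """Choose the T-SQL strategy for clearing one table.

    TRUNCATE is fast and resets the identity seed automatically, but it
    requires ALTER permission and cannot be used on schema-bound tables.
    DELETE FROM works with only the DataReader/DataWriter roles, at the
    cost of per-row logging and an explicit DBCC CHECKIDENT reseed.
    """
    if has_alter_permission and not is_schema_bound:
        return "TRUNCATE TABLE"
    return "DELETE FROM"
```

With only the DataReader and DataWriter roles, every table falls back to the DELETE FROM branch, matching the script in Listing 2.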
The T-SQL script to determine if the user has ALTER permissions must be called for each table. 'mytablename' is the name of the specific table being checked.
SELECT permission_name FROM fn_my_permissions('mytablename', 'OBJECT')
WHERE permission_name = 'ALTER' and subentity_name = ''
Running the Application
To run the application, copy the three files in the bin\Debug\ folder to the location of your choice, modify the connDatabase connection string value to specify the desired database, and run ClearDatabaseScriptUtility.exe. The .config file has a sample connection string for both integrated security and SQL Login modes. For the AdventureWorks database with integrated security, the connection string would be similar to:
Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=True
Figure 1 shows the main application form. The drop down in the upper left corner allows you to specify which of the available databases on the server that you would like to use. The first four buttons display informational grids that can be used to validate the generated script. Click the "Copy to Clipboard" button to place the grid contents on the clipboard to be pasted into other applications such as Excel.
[Click on image for larger view.]
Figure 1. The main application form.
The Table Counts button generates an alphabetical list of all tables by schema and table name, along with the current count of records in each table (see Figure 2). This grid provides a way to validate the record counts before and after the ClearDatabase script has been run. The list of tables is generated from the "connSQL.GetSchema(SqlClient.SqlClientMetaDataCollectionNames.Tables)" statement where connSQL is the SqlClient.SqlConnection object.
Foreign keys affecting a table must be dropped in order to TRUNCATE the table. The Foreign Key button generates an alphabetical listing of foreign key information sorted by the foreign key name and field index, as shown in Figure 3. This grid pulls together all the information needed to generate the scripts to both drop the foreign key constraints and re-create them after the table is truncated, including any extended properties attached to the foreign key constraints. These must be re-created as well if the foreign key constraint is dropped. This is the most complex query in the application. See Listing 3.
Indexed views must be schema bound to the referenced tables using the WITH SCHEMABINDING attribute. T-SQL User defined functions may optionally be schema bound to improve performance. Tables referenced in these situations cannot be altered to drop the foreign key constraints without dropping and re-creating the schema bound view or function. To avoid that, any schema bound tables use the DELETE FROM option. The Schema Bound button generates an alphabetical listing of tables affected by schema binding. See Figure 4 below.
The script to determine which tables are affected by schema binding is shown here:
SELECT DISTINCT db_name() as DatabaseName, SC.name as SchemaName, SY.Name as TableName
FROM sys.sql_expression_dependencies SD
INNER JOIN Sys.objects SY on SD.referenced_id = SY.Object_id
INNER JOIN Sys.schemas SC on SY.schema_id = SC.schema_id
INNER JOIN syscomments CM on CM.id = SD.referencing_id
WHERE is_schema_bound_reference=1 AND CM.text like '%schemabinding%'
ORDER BY SC.Name, SY.Name
Truncating a table is much faster than DELETE FROM when clearing a table, primarily because log transactions are not generated for each row. Truncating a table automatically resets any identity seed field to its original value. If the DELETE FROM is used, the script must explicitly reset the identity seed value to reflect the same behavior. The Identity Seed button generates an alphabetical listing of tables with identity seed values and the T-SQL command needed to reset it, as shown in Figure 5.
The script to determine the identity seed values is:
SELECT
    '[' + TABLE_SCHEMA + '].[' + TABLE_NAME + ']' as SchemaTable,
    IDENT_SEED('['+TABLE_SCHEMA + '].[' + TABLE_NAME+']') AS Seed,
    IDENT_CURRENT('['+TABLE_SCHEMA + '].[' + TABLE_NAME+']') AS [Current],
    'DBCC CHECKIDENT (''[' + TABLE_SCHEMA + '].[' +
        TABLE_NAME + ']'',RESEED,' +
        CAST(IDENT_SEED('['+TABLE_SCHEMA + '].[' + TABLE_NAME+']') as VarChar(8)) + ')'
FROM INFORMATION_SCHEMA.TABLES
WHERE IDENT_SEED('['+TABLE_SCHEMA + '].[' + TABLE_NAME+']') IS NOT NULL
ORDER BY TABLE_SCHEMA, TABLE_NAME
The Show Script button generates the actual T-SQL script to be used in SQL Server Management Studio to clear the database. See Figure 6. This button can be the first one clicked after selecting the database in the drop down. The four buttons to the left are informational and do not need to be clicked in order to generate the final script. Click the "Copy to Clipboard" button and then paste the text into a SQL Server Management Studio query window.
The Application Code
The Visual Basic solution to generate the script in Listings 1 and 2 is shown in Figure 7. The Data Access Layer (DAL) project interacts with the database and contains most of the application logic. The LibADO.vb file provides two general functions to easily return dynamic query results based on just the connection string, the query string, and an empty DataTable or integer. The ClearDatabaseScriptUtility project serves as the graphical user interface and has minimal code.
SQLServer.CreateClearDatabaseScript is the main method in the DAL project. It uses a list of ClearTableAction objects to represent each table and the pre-processing and post-processing script needed to clear the tables. CreateClearDatabaseScript follows the logic:
- Creates a StringBuilder initialized with 10K of string space to hold the final script but can grow larger.
- Gets the table names for the selected database and initializes the List(Of ClearTableAction).
- Sets each ClearTableAction to a TRUNCATE command, but then changes to a DELETE FROM command for any table that is schema bound.
- If the user has ALTER permission, adds script to disable any triggers prior to the table clear action and enables them after it is completed. See Listing 4 for the script to list the triggers on all tables.
- Gets the list of foreign key constraints.
- Multiple field foreign key constraints are combined into a single row.
- Any extended properties such as comments are combined and kept to be created with the constraint.
- Sort them by dependency so that all tables using a particular key are cleared before the key itself.
- If the user has ALTER permission, script to drop the foreign key constraint, truncate the table, and then recreate the foreign key constraint and any foreign key constraint extended properties.
- Loops through the List(Of ClearTableAction) and scripts the pre-processing and clear action for each table.
- Loops through the List(Of ClearTableAction) and scripts the post-processing for each table.
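The dependency ordering in the steps above amounts to a topological sort: every table holding a foreign key is cleared before the table it references. A hedged sketch of that ordering (the function and the table names are illustrative, not the article's VB code):

```python
from collections import defaultdict, deque

def clear_order(tables, foreign_keys):
    """Order tables so each FK child is cleared before its parent.

    tables       : list of table names
    foreign_keys : (child, parent) pairs, where child holds the FK
    """
    references = defaultdict(list)        # child -> parents it points at
    blocking = {t: 0 for t in tables}     # children not yet cleared
    for child, parent in foreign_keys:
        references[child].append(parent)
        blocking[parent] += 1
    ready = deque(t for t in tables if blocking[t] == 0)
    order = []
    while ready:
        table = ready.popleft()
        order.append(table)
        for parent in references[table]:
            blocking[parent] -= 1
            if blocking[parent] == 0:
                ready.append(parent)
    # Cyclic references cannot be cleared in pure dependency order;
    # append the remainder so they can fall back to DELETE FROM.
    order += [t for t in tables if t not in order]
    return order
```

For example, with OrderLine referencing Order and Order referencing Customer, the sketch clears OrderLine first, then Order, then Customer.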
I used several coding techniques in the DAL that I would like to point out.
For an object that was returning the result of a SQL query, if I needed a field that could be derived from the existing query data | 1.181386 | openbmb/Ultra-FineWeb |
Nanobubbles at the interface between water and a hydrophobic solid.
Xue Hua Zhang, Anthony Quinn, William A Ducker
Langmuir | Published : 2008
A very thin layer (5-80 nm) of gas phase, consisting of discrete bubbles with only about 40 000 molecules, is quite stable at the interface between a hydrophobic solid and water. We prepare this gas phase from either ambient air or from CO(2)(g) through a solvent exchange method reported previously. In this work, we examine the interface using attenuated total internal reflection infrared spectroscopy. The presence of rotational fine structure in the spectrum of CO(2) and D(2)O proves that molecules are present in the gas phase at the interface. The air bubbles are stable for more than 4 days, whereas the CO(2) bubbles are only stable for 1-2 h. We determine the average gas pressure inside t...
Why the Debt Ceiling Is Important, and Its Impact on the Markets and Economy
Congress on both sides of the aisle is playing a game of political chicken with the debt ceiling (see latest developments here); but what would actually hitting the ceiling mean for the markets and the economy in general?
Although the U.S. hit its $14.3 trillion debt ceiling on Monday, May 16, economic Armageddon hasn't yet rained down on the U.S. economy. Thanks to some slick Treasury Department maneuvering, the date when the U.S. really reaches the limit has been pushed to around August 2.
But instead of breathing a sigh of relief and resolving to engage in a bipartisan effort to resolve the debt ceiling issue in advance of the August drop-dead date, both sides are likely to wait until the last moment to avoid impact—threatening our fragile economic recovery in the process.
What Happens if We Hit the Debt Ceiling?
According to Treasury Secretary Timothy Geithner, reaching the ceiling would force the government to default on some of its obligations, which would have a "catastrophic economic impact."
In the worst-case scenario, however unlikely, the federal government would be unable to borrow additional money. Funds brought in from new debt issues are used to make principal and interest payments on the national debt, so without the ability to borrow additional funds, the Treasury could be forced to default on some of its debt.
Last week, the ratings service Moody's warned that "if there is no progress on increasing the statutory debt limit in coming weeks, it expects to place the U.S. government's rating under review for possible downgrade, due to the very small but rising risk of a short-lived default."
Despite assurances by some Republicans and others that default would have little concrete effect on government operations or the economy, the consequences would likely be devastating. A U.S. default would ripple through the world economy, destabilizing every corner of the markets; trillions in capital would vanish in the blink of an eye.
"Without an increase in the debt limit, the Treasury would be unable to meet all of the government's existing obligations, which could undermine the U.S. government's reputation in capital markets and raise costs of federal borrowing," according to the Congressional Research Service's recent history on the debt limit. An increase in the cost of borrowing for the U.S. would then increase the cost of borrowing for corporations and other borrowers, stunting economic growth.
Is it Time to Eliminate the Debt Ceiling?
Some commentators have suggested eliminating the debt ceiling, arguing that other checks and balances have replaced the need for the limit. The ceiling was first put into place to prevent the continuous financing battles in Congress that could hamper the Treasury's ability to fund the government's operations.
But the debt ceiling plays a valuable role in Washington by forcing periodic debate and compromise in Congress and forcing our elected officials to justify their unbalanced budgets.
Prior to World War I, Congress had to approve every debt issuance. The debt ceiling was introduced to give the Treasury the flexibility it needs to finance the federal government without being forced to seek Congressional approval at every turn.
As evidenced by the widespread coverage of Congress's debt talks in the mainstream media, the debt ceiling still serves an important purpose by forcing our elected officials to publicly justify their spending.
So, Will We Hit the Ceiling?
The answer is "Probably not." The debt ceiling has been raised about 80 times since it was enacted in the early 20th century. It's hard to believe that Republicans are doing anything here other than playing political chicken. The August 2 deadline gives them enough time to make their point ahead of the 2012 elections, extract some spending-cut concessions from the Democrats, and then sign on to an increase.
But all is not well that ends well. The longer Republicans wait to increase the debt ceiling, the greater the chance the country's cost of borrowing will increase. Approaching the absolute drop-dead date of August 2 and then increasing the limit at the 11th hour increases uncertainty in the markets and will likely push up the government's cost of borrowing.
Although the markets didn't see a precipitous drop in the lead up to the soft May 16 deadline, you can bet that stress in the markets will increase exponentially the closer we get to August 2 without an agreement to extend or eliminate the ceiling.
See also The Law Professor's blog at AdvisorFYI. | 1.00439 | Zyphra/Zyda-2 |
Catholic Parishes in Prince Edward Island
by Father Alfred E. Burke, ca. 1885
During the mid-1880's, Father Alfred E. Burke wrote histories for most of the Catholic parishes of that time on Prince Edward Island. The reasons for doing so are mentioned in the Introduction, below. These histories, copied and typed by Professor J-Henri Blanchard, most likely during the 1950's, have survived and are available on microfilm (Microfilm number F-1045) at the Centre d'études acadiennes in Moncton. The originals are held by the Diocesan Archives in Charlottetown. Copies may also be viewed at the PEI Public Archives and Records Office.
These histories were never published as originally intended [see the Introduction], and there is no evidence that they were ever formally published later. There is no mention of either the Magdalen Island parishes or the biographies of the priests in the transcript I have seen.
Although his sources are not listed, the material was probably derived from popular accounts and the memories of elder parishioners, with some data from church records interspersed among it. Even though it is, to some extent, second- or third-hand information, it is useful nonetheless. At the time Father Burke set these histories to paper, most of the people residing within these parishes were only a generation or two removed from the original founders. In some cases the oldest members of a parish may have been the founders themselves, or their children, and thus were witnesses to what Father Burke recorded.
I am presenting Father Burke's history as it appears in Prof. Blanchard's typed transcriptions with one exception - the formatting of paragraphs. These I have broken down into smaller, more logical, units to make reading easier. However, I have not touched the original sentence structure, punctuation, spelling or grammar.
The individual parish histories, as well as the introduction written by Father Burke, can be viewed using the links below in the table of contents. This is a copy of the same table of contents that appears in Prof. Blanchard's transcription.
Unfortunately, I did not have time to copy all of Prof. Blanchard's transcripts. Therefore, a handful of the histories are not included below. These can be identified by the lack of hyperlink to the actual parish history (but are not marked as "lost" or "missing.")
Readers should note that the files marked as "lost" or "missing" were just that when Prof. Blanchard created his transcript. Years afterwards - and after I completed my own transcriptions - copies of Father Burke's original handwritten notes were located at the Centre d'études acadiennes in Moncton, NB. The missing parishes were included amongst the originals.
The complete history is now available online and for download in PDF format at _URL_
I want to thank Claudia Boorman for having brought the online file to my attention.
TABLE OF CONTENTS
Introduction - 1 page
Alberton (Mission of the Sacred Heart) - 3 pages
Baldwin's Road (Mission of St. Theresa) - 7 pages
Bear River (Mission of St. Margaret) - 8 pages
Bloomfield - Lost
Brae (Mission of St. Mary) - 2 pages
Cardigan Bridge (Mission of All Saints) - 4 pages
Corran Bann (Mission of St. Michael) - 2 pages
Cove Head (Mission of St. Eugene's) - 1 page
Charlottetown (The Cathedral Parish of St. Dunstan's) - 37 pages
De Sable (Mission of St. Joseph's) - 6 pages
East Point (Mission of St. Columba) - 9 pages
Egmont Bay - Missing
Fort Augustus (Mission of St. Patrick) - 5 pages
Freetown (Mission of the Holy Magi) - 2 pages
Georgetown (Mission of St. James) - 6 pages
Hope River (Mission of St. Anne) - 4 pages
Indian River (Mission of St. Mary) - 5 pages
Kincora (Mission of St. Malachy) - 8 pages
Little Pond (Mission of St. Francis) - 2 pages
Miscouche - Missing
Montague Bridge (Mission of St. Mary) - 3 pages
Montague West (Mission of St. Michael) - 5 pages
Mont-Carmel - Lost
Morell (Mission of St. Lawrence) - 5 pages
Palmer Road (Mission of St. Thomas) - 3 pages
Rollo Bay (Mission of St. Alexis) - 8 pages
Rustico - Lost
St. Andrew's (Mission of St. Andrew's) - 9 pages
St. George - Missing
St. Peter's Bay (St. Peter's Mission) - 6 pages
Seven Mile Bay (Mission of St. Peter) - 3 pages
Souris (Mission of St. Mary) - 5 pages
Sturgeon (Mission of St. Paul) - 4 pages
Tignish - Missing
Township Seven (Mission of St. Mark) - 3 pages
Township Eleven (Mission of St. Bridget) - 3 pages
Tracadie (Mission of St. Bonaventure) - 5 pages
Vernon River (Mission of St. Joachim) - 6 pages
NOTE: The number of pages referred to in the Table of Contents is in regard to the length of the typed copies by Prof. Blanchard, and is his notation, not mine. His transcripts were type-written, double spaced, on regular 8.5 x 11 inch sheets. | 1.879447 | m-a-p/FineFineWeb |
What is rose leaf rolling sawfly?
Rose leaf-rolling sawfly is an insect pest of roses. Female sawflies insert eggs into rose leaflets and, while doing so, secrete chemicals that induce the leaf rolling. Caterpillar-like larvae emerge from the eggs and feed within the rolled leaflets.
Attacks are particularly severe if there is warm weather during the egg-laying period in late spring-early summer, as this increases the sawfly's activity.
Light infestations can usually be tolerated, but where a large proportion of foliage has been affected, the plant may suffer a loss of vigour.
Non chemical control
Pick off affected leaves and dispose of them before the larvae complete their feeding; this is only feasible when comparatively few leaves are affected. The removal of large numbers of leaves would be more harmful to the rose than the pest damage. Cultivation of the soil around roses during the winter may expose overwintering larvae, but may also damage the roots and encourage suckering.
It can be difficult to prevent the females laying eggs and initiating the damage. Pesticides often do not reach the larvae in the rolled leaves.
Chemical control
Deltamethrin (Bayer Sprayday Greenfly Killer) or lambda-cyhalothrin (Westland Plant Rescue Fruit & Vegetable Bug Killer) will control the adult sawflies, but as these are active over a six- to eight-week period in late spring and early summer, several applications would be necessary to prevent egg-laying.
The systemic insecticide thiacloprid (Provado Ultimate Bug Killer Ready To Use or Provado Ultimate Bug Killer Concentrate 2) may control the larvae feeding inside the rolled leaves.
Pesticides for gardeners (Adobe Acrobat pdf document outlining pesticides available to gardeners)
The adult sawfly is 3-4mm (about 1/8in) long, black in colour with two pairs of transparent wings. The females insert eggs into the leaflets during late April to early June and while doing so secrete chemicals that induce the leaf rolling.
The eggs hatch into pale green caterpillars, which grow up to 10mm (about 3/8in) long as they feed inside the rolled leaflets. During late June and July the larvae go down into the soil where they overwinter as non-feeding larvae before pupating in the spring. There is one generation per year. | 2.073966 | HuggingFaceFW/fineweb-edu |
Lowland Rain Forest
This site is under redevelopment. Its content is from 1998, but we will be updating it in the near future.
Many trees in lowland tropical rain forest have tall, straight trunks, especially the dipterocarps (far left) that are marketed as Philippine mahogany.
Climate is largely responsible for shaping the four widespread and several restricted types of habitats that originally occurred in the Philippines.
| 1.893792 | Zyphra/Zyda-2 |
Welcome to ComputerPedia™ -- The Computer Encyclopedia
A computer is a machine that manipulates data according to a list of instructions.
The first devices that resemble modern computers date to the mid-20th century, although the computer concept and various machines similar to computers existed earlier. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers (PCs). Modern computers are based on tiny integrated circuits and are millions to billions of times more capable while occupying a fraction of the space. Today, simple computers may be made small enough to fit into a wristwatch and be powered from a watch battery. Personal computers, in various forms, are icons of the Information Age and are what most people think of as "a computer"; however, the most common form of computer in use today is the embedded computer. Embedded computers are small, simple devices that are used to control other devices; for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and children's toys.
The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.
History of Computer Hardware:
The history of computer hardware encompasses the hardware, its architecture, and its impact on software. The elements of computing hardware have undergone significant improvement over their history. This improvement has triggered worldwide use of the technology as performance has improved and prices have declined. Computers are accessible to ever-increasing sectors of the world's population. Computing hardware has become a platform for uses other than computation, such as automation, communication, control, entertainment, and education. Each field in turn has imposed its own requirements on the hardware, which has evolved in response to those requirements.
The von Neumann architecture unifies our current computing hardware implementations. Since digital computers rely on digital storage, and tend to be limited by the size and speed of memory, the history of computer data storage is tied to the development of computers. The major elements of computing hardware implement abstractions: input, output, memory, and processor. A processor is composed of control and datapath. In the von Neumann architecture, control of the datapath is stored in memory. This allowed control to become an automatic process; the datapath could be under software control, perhaps in response to events. Beginning with mechanical datapaths such as the abacus and astrolabe, the hardware first started using analogs for a computation, including water and even air as the analog quantities: analog computers have used lengths, pressures, voltages, and currents to represent the results of calculations. Eventually the voltages or currents were standardized, and then digitized. Digital computing elements have ranged from mechanical gears, to electromechanical relays, to vacuum tubes, to transistors, and to integrated circuits, all of which are currently implementing the von Neumann architecture.
It is difficult to identify any one device as the earliest computer, partly because the term "computer" has been subject to varying interpretations over time. Originally, the term "computer" referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device.
The history of the modern computer begins with two separate technologies - that of automated calculation and that of programmability.
Examples of early mechanical calculating devices included the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150-100 BC). Hero of Alexandria built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when. This is the essence of programmability.
The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer. It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour, and five robotic musicians who play music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed every day in order to account for the changing lengths of day and night throughout the year.
The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers. However, none of those devices fit the modern definition of a computer because they could not be programmed.
In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.
It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer that he called "The Analytical Engine". Due to limited finances, and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.
Large-scale automated data processing of punched cards was performed for the U.S. Census in 1890 by tabulating machines designed by Herman Hollerith and manufactured by the Computing Tabulating Recording Corporation, which later became IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.
A succession of steadily more powerful and flexible computing devices were constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult (Shannon 1940). Notable achievements include:
Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic, and a measure of programmability. In 1998 the Z3 was shown to be, in principle, Turing complete, making it arguably the world's first operational computer.
The non-programmable Atanasoff–Berry Computer (1941), which used vacuum tube based computation, binary numbers, and regenerative capacitor memory.
The secret British Colossus computers (1943), which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. They were used for breaking German wartime codes.
The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
The U.S. Army's Ballistics Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.
Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of these being completed in Great Britain. The first to be demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM or "Baby"), while the EDSAC, completed a year after SSEM, was the first practical implementation of the stored program design. Shortly thereafter, the machine originally described by von Neumann's paper was completed but did not see full-time use for an additional two years.
Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word "computer" is now defined. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.
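The essence of the stored-program idea is that one memory holds both instructions and data, and a fetch-decode-execute loop walks through it. The toy machine below is a sketch of my own devising; its opcodes correspond to no historical instruction set.

```python
def run(memory):
    """Fetch-decode-execute loop over a single shared memory."""
    acc, pc = 0, 0  # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]  # fetch
        pc += 2
        if op == "LOAD":                      # decode + execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JUMP":
            pc = arg
        elif op == "HALT":
            return memory
        else:
            raise ValueError(f"unknown opcode {op!r}")

# Program occupies cells 0-7; its data lives in cells 10-12 of the SAME memory.
mem = ["LOAD", 10, "ADD", 11, "STORE", 12, "HALT", 0,  # instructions
       None, None,                                     # padding
       2, 3, 0]                                        # cells 10, 11, 12
print(run(mem)[12])  # prints 5: the sum, stored back into memory
```

Because the program is ordinary memory, a STORE into cells 0-7 would rewrite the program itself, which is exactly the flexibility the stored-program design introduced.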
Computers that used vacuum tubes as their electronic elements were in use throughout the 1950s. Vacuum tube electronics were largely replaced in the 1960s by transistor-based electronics, which are smaller, faster, cheaper to produce, require | 2.204895 | openbmb/Ultra-FineWeb |
Paul: Mission and Message
Drawing on Old Testament prophetic messages, Jewish history, and the life and teachings of Jesus, Paul developed the Christian concept of salvation history, all centered on the life, death, and resurrection of Christ. Because of his cultural background in both Judaism and in Greco-Roman society, Paul possessed sufficient insights to allow him to lift the gospel out from the complexity of Hebrew civil, ritual, and moral practices of Jewish life and make it more accessible to a multicultural world.
Paul's 13 letters to the believers applied faith to their lives. He touched doctrinal, as well as practical, topics. He counseled, encouraged, and admonished on matters of personal Christianity, relationships, and church life. Nevertheless, throughout his letters his main theme was "Jesus Christ and Him crucified" (1 Cor. 2:2, NKJV).
Paul was not only a man of letters. He also became known as the apostolic missionary par excellence, witnessing to the gospel from Syria to Italy, perhaps even to Spain. Within a decade, Paul established churches in four provinces of the Roman Empire.
This week we will take a look at Paul—both his mission and his message.
* Study this week's lesson to prepare for Sabbath, September 19.
In the Exodus from slavery in Egypt, God worked remarkable signs of providential care for Israel. Later generations of Jews developed the expectation that any new messenger sent from God should make themselves known by signs and wonders and miracles.
In contrast, in line with their philosophical and scientific heritage, Greeks sought a rational basis for belief, one that would satisfy the demands of human wisdom.
Paul did not dismiss the cultural and spiritual heritage of his target peoples but used it as an entry point for proclaiming Christ crucified. Those who desired signs found them in the life and ministry of Jesus and in the early church. Those who wanted logical elegance and rationality found it in Paul's arguments for the gospel message. Both types of persons ultimately had only one need, and that was to know the risen Christ and "the power of his resurrection" (Phil. 3:10). How Paul brought them to that knowledge depended upon the people to whom he was witnessing.
When Paul preached to Jewish listeners, he based his sermons on the history of Israel, linking Christ to David, and emphasizing the Old Testament prophecies pointing to Christ and foretelling His crucifixion and resurrection (Acts 13:16–41). That is, he started out with what was familiar to them, with what they revered and believed, and from that starting point he sought to bring them to Christ.
For Gentiles, Paul's message included God as Creator, Upholder, and Judge; the entry of sin into the world; salvation through Jesus Christ (Acts 14:15–17, 17:22–31). Paul had to work from a different starting point with these people than he did with the Jews (or with Gentiles who believed in the Jewish faith). Here, too, though, his goal was to lead them to Jesus.
As a skilled communicator, Paul in his mission work used the familiar to explain the unfamiliar. He took everyday features of the Greco-Roman world to illustrate the practical reality of new life in Christ. He drew especially from two areas of his converts' world for his teaching metaphors— athletes with their games and the ever-present Roman soldier.
Fondness for athletic accomplishments gripped Paul's world, much as it does ours. Ancient Greeks transmitted their love of competition by holding, over the centuries, no fewer than four separate cycles of Olympic-type contests, located in different parts of Greece. Romans inherited and further promoted athletic competition. Foot races were the most popular events and included a race of men wearing full suits of military armor. Wrestling also was popular. Athletes trained assiduously, and winners were richly rewarded. Ethnicity, nationality, and social class mattered little, since endurance and performance were the goals.
Starting with Marius, Roman emperors replaced temporary soldiers with full-time career warriors, garrisoned them across the Roman Empire, and upgraded and standardized their armor and weapons. By Paul's time, soldiers were recruited from various ethnic and national groups, whether or not they were Roman citizens. In return for rewards at the end of their term of service, soldiers pledged total loyalty to the ruling emperor, who in times of conflict personally led them into battle.
In what is perhaps Paul's final letter, he applied both soldiering and athletics to his own view of his life as a Christian missionary: "I have fought the good fight, I have finished the race, I have kept the faith" (2 Tim. 4:7, NIV).
In English translations of Paul's letters, the word law appears about one hundred thirty times, and in Acts of the Apostles, about twenty times. Paul endeavored to get his hearers and readers, regardless of cultural background, to understand that "law" carried several meanings, especially for Jews. Laws such as the Ten Commandments are in force for all people at all times. But other kinds of laws in the Old Testament and in Jewish culture, Paul did not consider in force for Christians.
In his writings, the apostle used the word law broadly in reference to rules for religious ceremonies, civil law, health laws, and purification laws. He wrote about being "under the law" (Rom. 3:19) and about being "released from the law" (Rom. 7:6, NIV). He described a "law of sin" (vs. 25) but also "law [that] is holy" (vs. 12). He mentioned the "law of Moses" (1 Cor. 9:9) but also the "law of God" (Rom. 7:25). Confusing as these phrases may seem to non-Jews, for the Jewish believer brought up in the Hebrew culture, the context would make clear which law was meant.
Paul realized that the ceremonial laws, detailing how one approached God through priesthood, Hebrew sanctuary, and sacrifices, ceased to be valid after the Crucifixion. They had served their purpose in their time but were now no longer needed. (This point would become especially apparent after the destruction of the temple.)
With the moral law expressed by the Ten Commandments, however, matters are different. In his letters, Paul quotes some of the Ten Commandments and alludes to others as universal ethical demands on all people, Jewish as well as Gentile. Having written against the practice of sin, Paul would not in any way have diminished the very law that defines what sin is. That would make about as much sense as telling someone not to violate the speed limit while at the same time telling them the speed limit signs are no longer valid.
No question, the cross of Christ was central to all that Paul lived and taught. But Paul didn't teach the Cross in a vacuum; instead, he taught it in the context of other teachings, as well; and one of them, perhaps the one most intricately linked to the Cross, was the Resurrection, without which the Cross would have been in vain.
Unfortunately, the majority of Christian traditions, as well as non- Christian religions, believe strongly in the immortality of the human soul. Against this belief, however, Paul emphasized repeatedly that:
Worship in almost all religions includes numerous false teachings based on the false concept of the immortality of the soul. These errors include things such as reincarnation, praying to saints, veneration of ancestral spirits, an eternally burning hell, and many New Age practices, such as channeling or astral projection. A true understanding of the Bible's teaching on death is the only real protection against these great deceptions. How unfortunate, too, that those who show the strongest inclination against accepting this truth are Christians of other denominations.
Paul was a hard worker with a strong personality and singleness of purpose. Such persons can be loners with few friends but many admirers. However, on his travels, two or three fellow workers often accompanied Paul. At least eight of these close fellow workers are mentioned by name (Acts 13:2; 15:22, 37; 16:1–3; 19:22; Col. 4:7, 10, 11; Philem. 24). To this must be added Paul's greetings to 24 people in Romans 16, in addition to general greetings to households.
The apostle believed in teamwork, especially in pioneering situations. At the same time, however, he did at times have conflict with fellow laborers.
"It was here that Mark, overwhelmed with fear and discouragement, wavered for a time in his purpose to give himself wholeheartedly to the Lord's work. Unused to hardships, he was disheartened by the perils and privations of the way. . . . This desertion caused Paul to judge Mark unfavorably, and even severely, for a time. Barnabas, on the other hand, was inclined to excuse him because of his inexperience. He felt anxious that Mark should not abandon the ministry, for he saw in him qualifications that would fit him to be a useful worker for Christ."—Ellen G. White, The Acts of the Apostles, pp. 169, 170.
The account in Acts reveals that Paul expected his companions to persevere in the toils and perils of their mission. For Paul, the close team constituted a church in miniature. He stressed the importance of setting a good example, the imitation model of mission. Dutiful yet loving relationships among team members became a pattern for the churches, which were often based on households. The team also provided an ideal setting for the training | 2.001429 | m-a-p/FineFineWeb |
The new high-pressure pump Multitec DN 200 will be available with four different hydraulic systems and two to six stages. It is designed for casing pressures of up to 40 bar and suitable for use in water supply, irrigation, pressure boosting or fire-fighting systems. Its maximum head is nearly 400 m and its maximum flow rate is 850 m³/h.
The pumps are driven by four-pole three-phase motors with a maximum power rating of 1000 kW. The advantage of the new pumps compared with higher-speed pump sets with smaller nominal diameters is that, running at 1450 (50 Hz) or 1750 (60 Hz) revolutions per minute, they will reach the same maximum flow rate, yet run much more quietly. Their low rotational speeds help reduce wear and have a positive impact on both the service life and the efficiency of the pump set. Energy efficiency, a long service life and low noise emissions are very important to customers in the water sector.
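As a plausibility check on these figures (my own calculation, not from the manufacturer), the hydraulic power P = ρ·g·Q·H at the quoted maxima comes out just under the 1000 kW motor rating. This is consistent with the fact that a pump never delivers its maximum head and maximum flow at the same operating point, and that hydraulic losses keep efficiency below 100%.

```python
# Hydraulic power: P = rho * g * Q * H  (SI units -> watts).
# Note that the maximum head and maximum flow quoted above lie at opposite
# ends of the pump curve, so pairing them is only an upper-bound check.

RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2, standard gravity

def hydraulic_power_kw(flow_m3_per_h: float, head_m: float) -> float:
    """Hydraulic power in kW for water at the given flow and head."""
    q = flow_m3_per_h / 3600.0  # convert m^3/h to m^3/s
    return RHO_WATER * G * q * head_m / 1000.0

print(f"{hydraulic_power_kw(850, 400):.0f} kW")  # vs. the 1000 kW motor rating
```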
The maximum temperature of the fluid handled should not exceed 60 °C. The pump sets have a maximum weight of 1.6 tonnes, with casings made of nodular cast iron and impellers made of bronze. To cover as broad a spectrum of applications as possible, the pump sets can be equipped with a variety of standardised single mechanical seals or gland packings. The diffusers can be supplied in grey cast iron or bronze, and their transition areas are protected against wear by casing wear rings. The pump shaft is supported by ball bearings at both ends. | 1.104202 | m-a-p/FineFineWeb |
Astrophysics > Cosmology and Nongalactic Astrophysics
Title: Observational Aspect of Black Hole Dark Matter
Authors: Leo Stodolsky
Abstract: Advances in high angular resolution astronomy make it conceivable that black hole dark matter could be detected via angular deviation effects. Assuming the dark matter in the galaxy is made of solar mass black holes, there is a non-trivial probability that a line of sight through the galaxy leads to micro-arcsecond deviations, a value that has been discussed for various astronomical projects. In cosmology the effects are magnified by an increased density at early times and an opening of angles due to redshift. If the dark matter is made of primordial black holes, present at the CMB, random deflections of the CMB photons lead to a limit on the angular resolution, approximately ${3}\times 10^{-7} \sqrt{M/M_\odot}\, rad$, with $M$ the mass of the black holes. Using the resolutions of $\sim 10^{-3} rad$ demonstrated in observations of the "acoustic peaks" then implies the limit $(M/M_\odot)\lesssim 10^{7}$. While this large value seems uninteresting, improved resolutions would lead to significant limits or conceivably the discovery of primordial black holes.
Comments: eight pages, no figures. v2 includes many small revisions and clarifications. A section was added to emphasize the difference between the CMB power spectrum and the CMB resolution. The long delay between first posting and publication is mainly due to the Elsevier firm losing the submission for the better part of a year
Journal reference: Mod.Phys.Lett. A, vol 36, no. 11 (April 2021)
Cite as: arXiv:1912.01325 [astro-ph.CO]
(or arXiv:1912.01325v2 [astro-ph.CO] for this version)
Submission history
From: Leo Stodolsky [view email]
[v1] Tue, 3 Dec 2019 11:58:07 GMT (9kb)
[v2] Fri, 5 Mar 2021 13:20:43 GMT (12kb)
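The abstract's final estimate can be reproduced directly: inverting θ ≈ 3×10⁻⁷ √(M/M☉) rad gives M/M☉ ≈ (θ / 3×10⁻⁷)². A quick illustrative sketch:

```python
def mass_limit_solar(resolution_rad, coeff=3e-7):
    """Invert theta ~ coeff * sqrt(M / M_sun) for the implied mass bound."""
    return (resolution_rad / coeff) ** 2

# A CMB acoustic-peak resolution of ~1e-3 rad gives M/M_sun ~ 1.1e7,
# matching the quoted limit of ~1e7.
print(mass_limit_solar(1e-3))
```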
Wastewater treatment plants
A wastewater treatment plant is a facility, or group of facilities, constructed or adapted to reduce the amount of pollutants in wastewater.
- The water enters the first compartment of the septic tank, where settling, separation, fermentation and degradation processes occur;
- Latest technology;
- Treatment efficiency of 99%;
- Purified water can be used for irrigation;
- Minimum consumption of electricity;
- Does not require additional substances for proper functioning;
- Minimum maintenance;
- Emptying every 2 years.
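The quoted 99% treatment efficiency can be read as a simple mass balance: effluent concentration = influent × (1 − η). A hedged sketch (the 300 mg/L influent BOD below is an illustrative textbook figure, not a specification of this plant):

```python
def effluent_concentration(influent_mg_per_l, removal_efficiency_pct):
    """Pollutant concentration remaining after treatment."""
    return influent_mg_per_l * (1.0 - removal_efficiency_pct / 100.0)

# 300 mg/L influent at 99% removal leaves ~3 mg/L in the effluent.
print(effluent_concentration(300, 99))
```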
ORDER NOW a wastewater treatment plant!
WATER TREATMENT AND STORAGE SYSTEMS
No review of the socioeconomic dimension of the American Revolution can afford to ignore the wartime debtor-creditor confrontation between tobacco planters and English and Scottish merchants. Many Virginians, as George Mason reminded Patrick Henry, fought the war to get rid of these debts.87 If the British creditors were tenacious in pressing the besieged government at home for their pound of flesh, the American debtors employed every legal dodge to evade payment. This brought a swarm of hornets about the ears of the federal court judges, their repeated decisions in favor of creditors notwithstanding. As minister plenipotentiary to Great Britain, Chief Justice Jay found it expedient to strip the Supreme Court of final jurisdiction over such issues, and the treaty which bears his name permitted appeals to a mixed commission. The United States would assume payments for all debts validated by the commission. Thereby many debtors managed to socialize their debts, even though the government succeeded in scaling down its obligations. The issue of the planters' debts is perhaps the best example of how the Revolution redistributed liabilities rather than assets.88
The case for significant social change during the Revolution still needs to be made. One could point to the insolvency laws, to the democratization of education, and to church disestablishment and religious liberty legislation. Indubitably, reform in these diverse categories helped create a more egalitarian and pluralistic society.
In no area is the social effect of the American Revolution more visible than in the opportunities for new men to enter government, business, and the professions. The Revolution brought all the "dregs" to the top, complained a Philadelphia grandee.89 Some who enjoyed a precipitous rise capitalized on the special opportunities the Revolution afforded in privateering, war manufacturing, and provisioning, on the new trade patterns resulting from the war, and on speculative opportunities provided by wartime and postwar inflation. A large body of statistical evidence is now available to show how state legislatures were altered to the advantage of newly settled areas and of men of less-established families. Statistics document some displacement of the old colonial "upper" class."90 James Madison, without benefit of a computer, had long before reached this same conclusion. In his Sixty-second Federalist he stressed "the mutability in the public councils," which he attributed to "a rapid succession of new members." "Every new election in the States," he said "is found to change one half of the representatives."
The biographical record also demonstrates how the Revolutionary War brought a transformation in politics, business, and the professions. Consider that populist prototype, the New Yorker Abraham Yates, Jr., always an object of venom among the Federalists, who reserved for him choice epithets ranging from the "late cobbler of laws and old shoes" to "an old booby." Apprenticed to a shoemaker, he became a lawyer, and as sheriff allied himself with Robert Livingston, Jr. in the skirmishes against the so-called tenant rebels. A central figure in putting the new state government into operation, he proved both in Congress and in the state legislature an unreconstructed anti-Federalist. Or take the Irish redemptioner Matthew Lyon, whose pugnacity, enterprise, and leadership (not to speak of an influential second marriage) elevated him within a decade after war's end to an established position in his region, even if his affluence failed to render some of his coarser habits acceptable. That orphaned backwoodsman Andrew Jackson, who would spend more time on horseracing and cockfighting than on Blackstone, was admitted to practice in 1788 after two years of haphazard tutelage, adjudged by the court to be "a person of unblemished moral character, and competent... [in] knowledge of the law." And why not Henry Clay? That barefoot boy of the Slashes in old Hanover was left by his mother at the age of fifteen in the office of the Virginia Court of Chancery. As he recalled it, he started his practice in Lexington in 1797 "without patrons, without favor or countenance of the great or opulent [and] without the means of paying my weekly board." Jackson's and Clay's was a vastly different era from the prewar years. A transformed society had spawned a new breed of professionals and politicians.91
A people's revolution achieved more than independence and nationhood. It brought new men to power, raised people's political aspirations, made the new governments of the Revolution more responsive to social inequities, and underpinned the notion of the sovereign people as the constituent power, of which the Preamble of the Federal Constitution is the most eloquent affirmation.
4. Among others, John Jay, for example, felt impelled to deny, as late as December 1775, that the Continental Congress aimed at independence. "Proofs that the Colonies Do Not Aim at Independence" [Philadelphia, after December 11, 1775], in Richard B. Morris et al., eds., John Jay: The Making of a Revolutionary, Unpublished Papers, _PHONE_ (New York, 1975), 198-201. [back to text]
6. Gordon S. Wood, "The Democratization of Mind in the American Revolution," in Leadership in the American Revolution, Library of Congress Symposia on the American Revolution (Washington, 1974), 84. [back to text]
7. See my Seven Who Shaped Our Destiny: The Founding Fathers as Revolutionaries (New York, 1975); "The American Dream Among Nations--What Promise? What Fulfillment?" in America at 200, Foreign Policy Association, Headline Series, number 227 (New York, 1975), 3-35. [back to text]
8. Thomas Jefferson to the Secretary of the Treasury, January 13, 1807, in Paul Leicester Ford, ed., Writings of Thomas Jefferson (New York, 1892-99), 9:7. See also Joan Hoff Wilson, "The Illusion of Change: Women and the American Revolution," in Alfred F. Young, ed., The American Revolution: Explorations in the History of American Radicalism (DeKalb, Ill., 1976), 383-445. [back to text]
9. The view of Winthrop D. Jordan, White Over Black: American Attitudes Toward the Negro, _PHONE_ (Chapel Hill, 1968), 342, 244, that the success of antislavery in the last quarter of the eighteenth century was almost within reach is controverted by David Brion Davis, The Problem of Slavery in the Age of Revolution, _PHONE_ (Ithaca, 1975), 255-57, and Edmund S. Morgan, "Slavery and Freedom: The American Paradox," Journal of American History 59 (1972): 6. For the role of blacks in the Revolution, see Benjamin Quarles, The Negro in the American Revolution (Chapel Hill, 1961), 111-57; George H. Moore, "Historical Notes on the Employment of Negroes in the American Army of the Revolution," Magazine of History, with Notes and Queries, No. 1 (1907); Richard B. Morris, The American Revolution Reconsidered (New York, 1967), 72-76. For the unfulfilled expectations of some black Tory refugees, see G. Halliburton, "The Nova Scotia Settlers of 1792," Sierra Leone Studies, N. S., No. 9 (December 1957): 16-25; Anthony Kirk-Greene, "David George: The Nova Scotia Experience," Sierra Leone Studies, N. S., No. 14 (December 1960): 93-120. [back to text]
10. Dirk Hoerder, "Boston Leaders and Boston Crowds, _PHONE_," in Young, ed., The American Revolution, 242, 248. For apprentices in the Philadelphia militia, see Eric Foner, Tom Paine and Revolutionary America (New York, 1976), 65, 126. [back to text]
11. See Richard B. Morris, Government and Labor in Early America (New York, 1946), 147-49, 314 n., 326, 362-63. The indentured servants were increasingly concentrated in rural areas, while towns like Boston were suffering a relatively rapid decline in immigration. James A. Henretta, "Economic Development and Social Structure in Colonial Boston," WMQ, 3d ser., 22 (1965): 83. In Philadelphia and its environs, the importation of German and Scotch-Irish redemptioners recommenced at the end of the Seven Years' War, while slave imports declined. Gary B. Nash, "Slaves and Slaveowners in Colonial Philadelphia," WMQ, 3d ser., 30 (1973), 223-56. Nevertheless, it is estimated that slaves and servants together of working age declined from 21 percent of Philadelphia's population (1767) to 16 percent (1775) and 5.5 percent (1783). [back to text]
13. Richard B. Morris, The American Revolution Reconsidered (New York, 1965), 60-65; Rowland Berthoff and John M. Murrin, "Feudalism, Communalism, and the Yeoman Freeholder: The American Revolution Considered as a Social Accident," in Stephen G. Kurtz and James H. Hutson, eds., Essays on the American Revolution (Chapel Hill
The user can install an unlimited number of access tubes, so that a single instrument can measure and log the soil moisture profile at any number of spots. It measures and logs the soil moisture every 10 cm of depth. It has a large digital display on which the measurements are shown both as numbers and as graphs.
At any time the user can recall the stored measurements and view them on the instrument's display. The user can also transfer the measurements to a PC as a simple ASCII file. In practice, the instrument enables the user to create an irrigation programme immediately.
It requires neither installation tubes nor large holes in the ground. It operates on the TDR method (Time Domain Reflectometry, i.e. measuring reflections in time). For that reason, it has memory for storing the measurements.
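TDR instruments typically convert the measured two-way travel time along a probe into an apparent dielectric permittivity Ka, and then into volumetric water content. A common empirical calibration is the Topp et al. (1980) polynomial; whether this particular instrument uses it is an assumption, so the sketch below is purely illustrative:

```python
C = 2.998e8  # speed of light in vacuum, m/s

def apparent_permittivity(travel_time_s, probe_length_m):
    """Ka from the two-way TDR travel time along a probe of given length."""
    return (C * travel_time_s / (2.0 * probe_length_m)) ** 2

def topp_vwc(ka):
    """Topp et al. (1980) empirical fit: Ka -> volumetric water content."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3

# Ka ~ 25 corresponds to a wet soil with roughly 40% volumetric water content.
print(topp_vwc(25))
```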
The user can work manually, scanning and viewing the soil moisture at a specific measuring point on the instrument's display. It can also operate automatically as a data logger. Besides the probes that measure at 5 different depths, the instrument accepts probes with more or fewer sensors. Portable measuring system for profile
This profile system measures with accuracy the soil moisture to 4 or 6 different depths. The system measures using special tubes which are installed permanently into the ground.
In this case, many sensors can be connected to a data logger in order to have continuous logging of the soil moisture. The system is able to give reliable measurements in any type of soil, including soft soil and soil with very high salinity.
It can be installed in the ground for short or long periods, and it can also be installed permanently. The sensor is factory pre-calibrated for almost all types of crop soils and it can operate with several irrigation systems.
It can be connected to the portable instrument that displays the soil moisture measurements or to a digital data logger
It is suitable even for permanent installation in the ground. The measuring principle is electromagnetic and doesn't require consumables or maintenance.
The measuring method is capacitive, with transmission of high-frequency electromagnetic pulses into the ground. The sensor measures from completely dry soil up to completely saturated soil.
The system comes with an electronic device for displaying the measurements, which is controlled by an internal microprocessor and has an easy-to-read digital display. The sensor can be connected to a data logger for continuous measurements in the field.
Electronic system for measuring profile soil moisture and soil conductivity at up to 16 different depths. The sensor is installed in a special tube, which is installed permanently in the soil. It is factory pre-calibrated in order to measure immediately, without any calibration, in any type of soil. The length of the tube is 1 or 1.6 m. The measuring principle is high-frequency electromagnetic. The sensor includes a data logger. The measurements can be downloaded either with a portable PC or via GPRS (Internet).
The sensor accurately measures soil moisture, temperature and salinity. It is fully encapsulated, quick and easy to install, and can be fully buried to reduce the risk of machine damage. The sensors are available in four lengths:
The sensors are installed in increments of 10 cm. You can get a sensor of any length of your choice:
Two wild-type field populations of root-knot nematodes (Mi-Vfield, Mj-TunC2field), and two isolates selected in the laboratory for virulence on resistant tomato cultivars (SM2V, SM11C2), were used to induce a resistance reaction in tomato to the soil-borne parasites. Epigenetic and metabolic mechanisms of resistance were detected and compared with those occurring in partially or fully successful infections. The epigenetic mechanisms activated in plant resistance, as opposed to those activated in infected plants, were detected by analyzing the methylation status of total DNA by ELISA methods, and the expression level of key genes involved in the methylation pathway by qRT-PCR. DNA hypo-methylation and down-regulation of two methyl-transferase genes (CMT2) characterized the only true resistance reaction, obtained by inoculating the Mi-1.2-carrying resistant tomato cv. Rossol with the avirulent field population Mi-Vfield.
On the contrary, in the roots in which nematodes were allowed to develop and reproduce, total DNA was generally found to be hyper-methylated and methyl-transferase genes up-regulated. DNA hypo-methylation was considered to be the upstream mechanism that triggers the general gene over-expression observed in plant resistance. Gene silencing induced by nematodes may be obtained through DNA hyper-methylation and methyl-transferase gene activation. Plant resistance is also characterized by an inhibition of the anti-oxidant enzyme system and activation of the defense enzyme chitinase, as opposed to the activation of that system and inhibition of the defense enzyme glucanase in roots infested by nematodes.
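qRT-PCR expression levels of the kind mentioned above are conventionally reported as fold changes via the 2^-ΔΔCt method (Livak and Schmittgen); whether this study used exactly that method is not stated, so the sketch below is a generic illustration:

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change by the 2^-ddCt method (assumes ~100% primer efficiency)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# A target Ct two cycles lower (relative to the reference gene) in treated
# vs control roots corresponds to a 4-fold up-regulation.
print(relative_expression(22, 18, 24, 18))  # 4.0
```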
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
got out, the eating public went ballistic. Within days, the U.S.D.A. allowed schools to drop the product, and several supermarket chains stopped carrying it, shuttering several of the plants that produce it. Shortly after this episode, I received a panicky phone call from someone in the food industry, a buyer for one of the big food-service companies. After venting about the "irrationality" of the American consumer, he then demanded to know: "Who's going to be hit next? It could be any of us."
So it appears the loss of confidence is mutual: the food industry no longer trusts us, either, which is one reason a label on genetically modified food is so terrifying: we might react "irrationally" and decline to buy it. To win back this restive public, Big Food recently began a multimillion-dollar public-relations campaign, featuring public "food dialogues," aimed at restoring our faith in the production methods on which industrial agriculture depends, including pharmaceuticals used to keep animals healthy and speed their growth; pesticides and genetically modified seeds; and concentrated animal feeding operations. The industry has never liked to talk about these practices — which is to say, about how the food we eat is actually produced — but it apparently came to the conclusion that it is better off telling the story itself rather than letting its critics do it...
(10 October 2012)
National recovery plan for the Endangered Osborn's Eyebright (Euphrasia collina subsp. osbornii)
Department for Environment and Heritage, South Australia
Euphrasia collina subsp. osbornii (Osborn's Eyebright) is a South Australian endemic plant. It is an erect, perennial, partly parasitic herb, 25 to 47 cm high (Barker, 1982), characterised by decussate leaves that are thick, fleshy and pale green. The leaves have 1-8, usually 3-6, pairs of blunt teeth along margins and are covered with glandular hairs (Barker, 1982). The flowers vary from white to pink or lavender and paler inside (Jessop and Toelken, 1986; Barker, 1982). The corolla is bilabiate with a tube, a hooded upper lip and a three lobed spreading lower lip (Jessop and Toelken, 1986). A yellow spot is sometimes found behind the lowest lobe (Barker, 1982). The lobes are pubescent over all but the tips (Jessop and Toelken, 1986). Euphrasia collina subsp. osbornii usually flowers from August to December, although collections have been made in March and June (Barker, 1982).
Euphrasia collina subsp. osbornii (W.R. Barker) is currently listed in South Australia as endangered under the National Parks and Wildlife Act 1972 (NPW, 1972) and as endangered at the national level under the Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act). Euphrasia collina subsp. osbornii also meets the 2001 IUCN criteria EN B2ac (ii, iv), because its area of occupancy is less than 500 km², it has a projected continuing decline in area of occupancy and number of mature individuals, and experiences extreme fluctuations in the area of occupancy and number of mature individuals. The subspecies occurs in seven protected areas, five reserves and four heritage agreements within South Australia.
The overall objective of this Recovery Plan is to reduce the extinction risk of this subspecies so that it is downlisted from endangered to vulnerable.
for P2Y purinoceptor internalization. These data describe a novel function of ARF6 in the internalization of P2Y purinoceptors and demonstrate the integral importance of this small GTPase upon platelet ADP receptor function.
11. Platelet alpha-2 adrenergic receptor-mediated phosphoinositide responses in endogenous depression
International Nuclear Information System (INIS)
Mori, Hideki; Koyama, Tsukasa; Yamashita, Itaru
1991-01-01
We have previously indicated that epinephrine stimulates phosphoinositide (PI) hydrolysis by activating alpha-2 adrenergic receptors in human platelets. This method involves the measurement of the accumulation of [3H]-inositol-1-phosphate (IP-1) as an index of PI hydrolysis; lithium is added to inhibit the metabolism of IP-1, thus giving an enhanced signal. In the present study, we assessed the platelet alpha-2 adrenergic receptor-mediated PI responses in samples from 15 unmedicated patients with endogenous depression and 15 age- and sex-matched control subjects. The responses to epinephrine in the depressed patients were significantly higher than those of the controls, whereas the basal values did not differ significantly. These results support the hypothesis that platelet alpha-2 adrenergic receptors may be supersensitive in patients with endogenous depression
12. Regulation of platelet activating factor receptor coupled phosphoinositide-specific phospholipase C activity
International Nuclear Information System (INIS)
Morrison, W.J.
1988-01-01
The major objectives of this study were two-fold. The first was to establish whether binding of platelet activating factor (PAF) to its receptor was integral to the stimulation of polyphosphoinositide-specific phospholipase C (PLC) in rabbit platelets. The second was to determine regulatory features of this receptor-coupled mechanism. [3H]PAF binding demonstrated two binding sites, a high affinity site with an inhibitory constant (Ki) of 2.65 nM and a low affinity site with a Ki of 0.80 μM. PAF receptor-coupled activation of phosphoinositide-specific PLC was studied in platelets which were made refractory, by short-term pretreatments, to either PAF or thrombin. Saponin-permeabilized rabbit platelets continue to regulate the mechanism(s) coupling PAF receptors to PLC stimulation. However, GTPγS and GDPβS, which affect guanine nucleotide regulatory protein functions, were unable to modulate the PLC activity to any appreciable extent as compared to PAF. The possible involvement of protein kinase C (PKC) activation in regulating PAF-stimulated PLC activity was studied in rabbit platelets pretreated with staurosporine followed by pretreatments with PAF or phorbol 12-myristate 13-acetate (PMA)
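Two-site binding data of this kind can be visualized with a simple occupancy model. The sketch below uses the published Ki values as stand-ins for dissociation constants (an approximation, since Ki and Kd differ in general) and assumes equal capacity at both sites:

```python
def two_site_occupancy(l_nm, kd1_nm=2.65, kd2_nm=800.0):
    """Mean fractional occupancy across two independent sites (equal Bmax)."""
    site1 = l_nm / (l_nm + kd1_nm)  # high-affinity site
    site2 = l_nm / (l_nm + kd2_nm)  # low-affinity site
    return 0.5 * (site1 + site2)

# At 2.65 nM free ligand the high-affinity site is half occupied while the
# low-affinity site is barely touched, giving a mean occupancy of ~0.25.
print(two_site_occupancy(2.65))
```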
13. Mechanical circulatory support is associated with loss of platelet receptors glycoprotein Ibα and glycoprotein VI.
Science.gov (United States)
Lukito, P; Wong, A; Jing, J; Arthur, J F; Marasco, S F; Murphy, D A; Bergin, P J; Shaw, J A; Collecutt, M; Andrews, R K; Gardiner, E E; Davis, A K
2016-11-01
Essentials Relationship of acquired von Willebrand disease (VWD) and platelet dysfunction is explored. Patients with ventricular assist devices and on extracorporeal membrane oxygenation are investigated. Acquired VWD and platelet receptor shedding is demonstrated in the majority of patients. Loss of platelet adhesion receptors glycoprotein (GP) Ibα and GPVI may increase bleeding risk. Background Ventricular assist devices (VADs) and extracorporeal membrane oxygenation (ECMO) are associated with bleeding that is not fully explained by anticoagulant or antiplatelet use. Exposure of platelets to elevated shear in vitro leads to increased shedding. Objectives To investigate whether loss of platelet receptors occurs in vivo, and the relationship with acquired von Willebrand syndrome (AVWS). Methods Platelet counts, coagulation tests and von Willebrand factor (VWF) analyses were performed on samples from 21 continuous flow VAD (CF-VAD), 20 ECMO, 12 heart failure and seven aortic stenosis patients. Levels of platelet receptors were measured by flow cytometry or ELISA. Results The loss of high molecular weight VWF multimers was observed in 18 of 19 CF-VAD and 14 of 20 ECMO patients, consistent with AVWS. Platelet receptor shedding was demonstrated by elevated soluble glycoprotein (GP) VI levels in plasma and significantly reduced surface GPIbα and GPVI levels in CF-VAD and ECMO patients as compared with healthy donors. Platelet receptor levels were also significantly reduced in heart failure patients. Conclusions These data link AVWS and increased platelet receptor shedding in patients with CF-VADs or ECMO for the first time. Loss of the platelet surface receptors GPIbα and GPVI in heart failure, CF-VAD and ECMO patients may contribute to ablated platelet adhesion/activation, and limit thrombus formation under high/pathologic shear conditions. © 2016 International Society on Thrombosis and Haemostasis.
14. Response to platelet-activating factor in human platelets stored and aged in plasma. Decrease in aggregation, phosphoinositide turnover, and receptor affinity
International Nuclear Information System (INIS)
Shukla, S.D.; Morrison, W.J.; Klachko, D.M.
1989-01-01
Human platelet concentrates were stored in polyolefin bags at 22 to 24 degrees C on a horizontal shaker for up to 8 days. At different intervals, aliquots of platelet-rich plasma (PRP) were removed aseptically and five variables, i.e., platelet counts, morphology, platelet-activating factor (PAF)-stimulated aggregation, phosphoinositide turnover, and [3H]PAF binding to platelet receptors, were studied. The number of platelets did not change during the 8 days of storage. Scanning electron microscopy of the platelets revealed a gradual morphologic change from biconcave flat discs to irregular, crenated forms. The PAF-induced aggregation of platelets declined with time of storage. A decrease to 50 percent of the Day 1 aggregatory response to PAF was evident on Day 2, and there was a further decline to about 20 percent by Day 6. Similarly, PAF receptor-coupled phosphoinositide turnover, as monitored by 32P incorporation into individual phosphoinositides, decreased dramatically with storage. After 2 to 3 days of storage, the phosphoinositide turnover was reduced to 50 percent of the original response, and it continued to decline to about 25 percent of original response by Day 5 or 6. The binding of [3H]PAF to washed human platelets indicated subtle changes between Days 2 and 4, which became more noticeable by Day 6. These results have raised the possibility of changes in the number of the receptors and/or their affinity for the ligand during storage. We conclude that although the number of platelets was maintained during storage for 8 days, a general deterioration of their responses to PAF occurred at the levels of cell surface receptor, transmembrane signaling (phosphoinositide turnover), and response (aggregation)
15. Growth Arrest-Specific 6 (Gas6) and TAM Receptors in Mouse Platelets.
Science.gov (United States)
Uras, Fikriye; Küçük, Burhanettin; Bingöl Özakpınar, Özlem; Demir, Ahmet Muzaffer
2015-03-05
Growth arrest-specific 6 (Gas6) is a newly discovered vitamin K-dependent protein, which is a ligand for TAM receptors [Tyro3 (Sky), Axl, and Mer] from the tyrosine kinase family. Gas6 knockout mice were resistant to venous and arterial thrombosis. There are contradictory reports on the presence of Gas6 and its receptors in mouse platelets. The objective of this study was to investigate whether Gas6 and its receptors were present in mouse platelets or not. Specific pathogen-free BALB/c male and female mice of 8-10 weeks old and 25-30 g in weight were anesthetized under light ether anesthesia and blood samples were taken from their hearts. RNAs were isolated from isolated platelets, and then mRNAs encoding Gas6 and TAM receptors were detected by reverse transcription-polymerase chain reaction (RT-PCR). Protein concentrations of Gas6 and TAM receptors in platelets were measured by ELISA, but not those of Mer, because of the absence of any commercial ELISA kit for mouse specimens. RT-PCR results indicated the presence of mRNAs encoding Gas6 and Mer in mouse platelets. However, although RT-PCR reactions were performed at various temperatures and cycles, we could not detect the presence of mRNAs encoding Axl and Tyro3 (Sky). Receptor protein
Business and Different Financial Issues
Corporate managers have a professional responsibility to ensure the integrity and faithful representation of their company's financial statements. Outside auditors are responsible for expressing an independent opinion on financial statements to determine if they are presented fairly and in accordance with GAAP. These professional roles are the cornerstone of the U.S. financial system, which protect public interest and investor confidence. Over the past 50 years however, the pressure on corporate management to meet analysts' short-term earning projections, showing continued growth, has increased dramatically. The markets have punished companies harshly for missing earnings projections, even by a small amount. This pressure led to the management practices commonly referred to as "earnings management" and "earnings smoothing". These developments have had an adverse effect on the quality of earnings and financial statements. As SEC Chairman Arthur Levitt described, this process is a "game among market participants" (The Numbers Game, pg. 1). The integrity of financial statements fell in priority for corporate management. They were willing to take risks and push the accounting rules to their limits to achieve their desired financial results. Auditors also contributed to this problem by not standing up to their clients who were implementing such questionable accounting practices and misrepresenting their financial statements. The standard setters, regulators, and politicians were pressured and lobbied by industries not to impose stricter standards on corporations. As a consequence of these developments, there came a series of financial collapses and accounting scandals in the U.S., such as Enron, Sunbeam, Waste Management, and Arthur Andersen – and billions of investment dollars were lost.
One reason behind the "earnings management" mindset is that top management had personal financial incentives to increase the value of their companies' stock. Managers' compensation packages included thousands, and in some cases millions, of stock options. In order to increase the value of these stock options, top management had to meet or exceed analysts' earnings expectations for their companies. As a result, personal greed and market pressures led management to adopt aggressive accounting practices and techniques to help them craft preferred short-term financial results. Accounting principles give managers the flexibility to use their judgment, which in turn creates an opportunity for misuse and abuse in the application of accounting standards. Corporate management employed five popular earnings management techniques: "big bath" restructuring charges, creative acquisition accounting, "cookie jar" reserves, materiality, and revenue recognition. For example, Sunbeam's CEO, Mr. Dunlap, recorded an excessive restructuring write-off and created "cookie jar" reserves in 1996, the year he became CEO. This technique made the following year's results appear superior and created an illusion of a company turnaround. Sunbeam also improperly recognized millions of dollars in sales revenue for paper transactions through the "channel stuffing" technique. Waste Management, meanwhile, manipulated depreciation expense figures to overstate its earnings.
This priority on short-term financial gains over long-term results led many corporate managers to create financial devastation, and even total collapse, for their companies. These managers misrepresented the real financial picture, misleading investors in the process. Eventually, investors lost confidence in the capital markets, putting the entire financial system in jeopardy. The integrity, credibility, and transparency of financial statements, together with a focus on long-term financial goals, should be a top priority for managers. The whole financial community is responsible for restoring financial stability and success in the United States. Auditors should maintain independence and uphold professional and ethical standards, private standard setters must continue to improve accounting standards, and regulators must enforce them.
The SP2 C28LSSOB hydraulic gear pump is a positive displacement pump known for its robust construction and reliability.
It is typically built using high-quality materials such as cast iron, aluminum, or steel to withstand the demanding conditions of industrial environments. These materials provide resistance to corrosion and wear, ensuring a long service life for the pump.
Flow Rate and Pressure:
One of the most critical specifications of this pump is its flow rate and pressure capabilities. The SP2 C28LSSOB is designed to deliver a specific flow rate of hydraulic fluid, which is measured in gallons per minute (GPM) or liters per minute (LPM). The actual flow rate may vary depending on the specific model and configuration of the pump. Matching the pump's flow rate to the system's requirements is essential for optimal performance.
Additionally, this pump can generate hydraulic pressure, which is measured in pounds per square inch (PSI) or bars. The maximum pressure capacity of the SP2 C28LSSOB hydraulic gear pump is a crucial consideration, as it must meet or exceed the system's pressure requirements.
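Flow and pressure together determine the theoretical hydraulic power a pump transmits. The sketch below uses the standard identity P[kW] = Q[L/min] × p[bar] / 600 with placeholder figures, not datasheet values for this model:

```python
def hydraulic_power_kw(flow_lpm: float, pressure_bar: float) -> float:
    """Theoretical hydraulic power in kW (flow in L/min, pressure in bar).

    Uses the standard identity P[kW] = Q[L/min] * p[bar] / 600,
    ignoring volumetric and mechanical losses.
    """
    return flow_lpm * pressure_bar / 600.0

# Example with placeholder figures (not datasheet values for this pump):
# a pump delivering 60 L/min at 210 bar
print(round(hydraulic_power_kw(60, 210), 1))  # → 21.0
```

Sizing a pump against the system's requirements means checking both numbers at once: a pump that meets the flow target but not the pressure target (or vice versa) will not deliver the needed power.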
Mounting Options:
The SP2 C28LSSOB pump offers various mounting options to accommodate different installation needs. Common mounting configurations include flange mount, foot mount, and SAE mount. The choice of mounting style depends on the spatial constraints and layout of the hydraulic system, ensuring ease of installation and efficient operation.
Power Source and Drive Options:
To operate, hydraulic gear pumps like the SP2 C28LSSOB require a power source, typically an electric motor or an internal combustion engine.
The pump's drive options refer to how it connects to the power source. This pump can be coupled directly to the motor or engine using a coupling or connected via a belt and pulley system. The selection of a drive option depends on the specific requirements and limitations of the hydraulic system.
Sealing and Shaft Options:
Sealing and shaft options are crucial aspects of hydraulic gear pumps, as they determine the pump's ability to prevent leaks and maintain efficiency. The SP2 C28LSSOB pump often features robust sealing systems, such as lip seals or mechanical seals, to prevent fluid leakage and contamination from entering the pump. Shaft options may include keyed or splined shafts, which interface with other components of the hydraulic system.
Temperature Range and Fluid Compatibility:
Another essential consideration is the pump's operating temperature range and fluid compatibility. The SP2 C28LSSOB pump is designed to operate within a specified temperature range, ensuring it functions optimally in various environmental conditions. It must also be compatible with the hydraulic fluid used in the system, whether it's hydraulic oil, synthetic fluid, or another type. Compatibility guarantees long-term reliability and prevents damage to the pump.
What are the top five things a person just charged with a DUI in Columbus, Ohio should do?
While a DUI arrest in Columbus, Ohio can be frightening, embarrassing and overall an overwhelming experience, there are several options and actions that someone in this scenario should immediately exercise to reach the most favorable outcome for this situation.
First and foremost, much depends on the specific laws of the jurisdiction, whether this is a first offense for this person, and the events of the incident or arrest. As such, a key question is whether the prosecution has valid evidence to move forward with a case against the charged individual. Collected items such as blood tests and breathalyzer results may be introduced; however, the state must prove the case beyond a reasonable doubt, so these may be called into question and potentially even excluded as admissible evidence. Reviewing the laws, becoming familiar with the procedures, and consulting with an attorney as quickly as possible can go a long way toward a favorable outcome.
A second area to focus on is the police procedure if the charged individual has been arrested or stopped for DUI or suspicion of DUI. How the police act during the stop, how the reports are filled out and completed, and whether the information noted is accurate can make or break a DUI case when it is taken to court. If any of the information is incorrect or inaccurate, the case may be weakened or even dismissed. Again, knowing this information up front is helpful, as is seeking out help from a competent attorney who is familiar with these rules. It is also important not to admit anything to the arresting or charging officers, but to speak only to one's attorney to avoid negative consequences.
Thirdly, an individual should review all possible facets and aspects of the arrest and case. Courts will generally ask a defendant to enter a plea, but entering a guilty plea can automatically lead to penalties including fines, jail time, and ignition interlock devices, to name a few examples. Checking the circumstances and confirming what could happen can help a charged person make the best choice for himself or herself and also provide peace of mind through knowledge of the worst-case consequences.

The fourth point to consider is to know what is needed to respond effectively to a DUI charge. While hiring an attorney is an expense, to be sure, it is often the wisest course for a solid outcome. Relying on a public defender or answering charges alone in the courtroom is not always the best option, and could result in greater costs, especially if the defendant is found guilty or pleads no contest to the charges. Again, seeking out an attorney is usually the best plan in this instance.
If you are faced with a DUI charge, immediately contact Goldman & Rosenthal so that your case can be assessed. You can be assured that Goldman & Rosenthal will work towards obtaining the best possible outcome for you.
impose the lone-genius myth onto movie creation, attributing creativity to the director. But although the director has a unique creative position, unlike the painter, he or she cannot create a movie without a large support staff. The collaborative nature of movie production can't be explained with individualist approaches. (Sawyer, 2006, p. 197) Sawyer's call to study filmmaking as collaborative creativity is complemented by Montuori and Purser's call to think of collaborative creativity as generative of voices and stories about creativity itself. creativity must not be viewed as purely self-assertive and self-expressive, but
it must, in fact, also fertilize the soil of creation for others, rather than being a cancerous ego expansion. It must do this by opening up possibilities, empowering others, and making them aware of their own creativity, in short by providing a context for it. The creation of a context for creativity does not rely merely upon the creation of a narrative style with which one may find a voice. Rather it creates the ground from which a plurality of narratives can emerge. (1995, p. 104, original emphasis) These calls, for studies of collaborative creativity in filmmaking that generate a plurality of narratives, are answered in this study. The stories told in the discourses of making-of documentaries are narrative accounts of the collaborative creativity that is filmmaking, grounded in human communication that, indeed, creates the ground from which a plurality of narratives can emerge. Organizational creativity research, as outlined above, focuses primarily on two things: 1) on innovation, change to the organization itself, or 2) on creativity injected into the workplace as an adjunct ingredient to enhance a company's main endeavor, such as selling copiers. Moreover, present studies of artistic group creativity mostly consist of live performances. This study centers on an artistic organization, filmmaking, where creativity is the sine qua non of its existence: a large group producing performances in fixed form. As such, this study fills the lacuna between organizational creativity and live artistic creativity research. This study moves beyond creativity research that still restricts itself to social science and biographical methods, conceptions of individuals as repositories of sole creative power or aboriginal genius, systems models that still posit solo creativity
within larger contexts or conditions, and organizational production models of input/output. This study is creativity research that begins with a different method, dramatism; that offers a different conception of individuals in groups, as storytellers; that nests creativity in a larger frame, communication; and that sees production not as input/output, but as language use that seeks to create spaces of cooperative interaction.

A Dramatistic Analysis: History, Audience, and Kenneth Burke

A communication perspective is necessary to the study of collaborative creativity because language not only serves to create the product during the creative event but also to structure, historicize, and dramatize the creative event in retrospect. The discourses of collaborative creativity are always storied in form, time, and symbols. That is, all that remains of the creative experience itself is the story. The product is, so to speak, the death of the creative process, but the process is inscribed and relived in the story. MODs, as exemplars of a plurality of creator narratives, call for attention to creative processes as many intertwining and interdependent stories. As stories of human purposes in mutual pursuit of common goals, collaborative creativity calls for the dramatistic method, based on the terminology and perspective of ritual drama rather than that of biology, machines, or computers. This section introduces a brief history of MODs and their rhetorical appeals for audiences. It then introduces the key terms of Kenneth Burke that will be utilized throughout this rhetorical analysis of MOD discourse.
A Brief History of Making-of Documentaries

Making Motion Pictures: A Day in the Vitagraph Studios was the first Hollywood making-of documentary, released in 1908 (Behlmer and Thomas). In 1912, the Edison Company released a fifteen-minute film entitled How Motion Pictures Are Made and Shown (p. 97). Studio-created featurettes abounded from the 1930s to the 1960s and were intended to plug upcoming releases, introduce new stars, or show off technological innovations such as color (Arthur, 2004, p. 39). With the demise of the Old Hollywood studio system run by autocratic moguls like Jack Warner (Warner Brothers), Daryl Zanuck (20th Century Fox), and Louis B. Mayer (MGM), studios were purchased by corporate entities having no knowledge whatsoever about filmmaking. Clueless studio heads, in the late 1960s and early 1970s, handed young, radical filmmakers the keys to the kingdom in hopes of capturing the youth market (Biskind, 1998). It was during this period of Hollywood's economic and artistic upheaval that MODs as we know them today were born. As the young generation of filmmakers, fresh out of the University of Southern California and University of California at Los Angeles film schools, rebelled against the old studio system, they found an interest, perhaps a need, to document and publish the stories of how and why they made movies their way. While still overseen by modern-day studios, which have regrouped and now function as financing and distribution entities (Biskind, 1998), the modern MOD differs from the Old Hollywood behind-the-scenes featurettes by revealing much more of the filmmakers' information and attitudes. George Lucas, with 16mm camera in hand and later to be the subject of a few MODs himself, filmed the first New Hollywood MOD when he accompanied his friend
and mentor, Francis Ford Coppola, across the United States as the latter filmed The Rain People (Leva, 2004a). As industry consensus credits Lucas with artistic and technological innovations that have revolutionized how films are made, it is perhaps not insignificant that Lucas's The Making of The Rain People (1969) spearheaded the modern MOD. While some MODs, such as The Making of a Legend: Gone With the Wind (David Hinton's 124-minute 1988 MOD), were broadcast on television and later released on videocassette, the packaging of MODs as crown jewels of special edition sets of DVD extras owes its origin to The Criterion Collection. This manufacturer and distributor of, according to their mission statement, important classic and contemporary films, started the practice of including a collection of supplements to the film: director commentary, trailers, MODs, and additional documentaries and interviews. Criterion president Peter Becker refers to these as a "film school in a box," first on laser disc and then on DVD (Ulaby, 2004, June 12). Soon the special edition DVD, complete with supplements mirroring the Criterion Collection, began to proliferate in the market, and such supplements are now standard fare.

The Audience Appeal of MODs

It's a good bet that MODs are not beginning to flood the pop culture landscape exclusively as analytical bait for eager creativity researchers like me. Nor, I would venture to guess, are MODs eagerly viewed for their film school in a box opportunity, despite the low tuition. Only a few media studies (Arthur, 2004; Hight, 2005; Skopal, 2007) have focused on MODs, and no studies have addressed them outside of that discipline. Instead, I approach MODs as collaborative creations in their own right
whose rhetoric deserves an introduction here. MODs appeal to audiences for at least four reasons. First, they are great storytelling. MODs are filled with humorous, touching, thrilling, and inspiring anecdotes illustrated with skillfully edited film clips, production stills, and behind-the-scenes footage. For example, actor Alfred Molina, having been covered with real tarantulas while filming Raiders of the Lost Ark with director Steven Spielberg, recalls, "These spiders, they're running and they're dropping and they're fighting and they're running over my face, and Steven's going 'Shoot! Shoot!' Like this (snapping fingers). And he's going, 'Alfred, Alfred, look scared!' I'm going, 'I'm scared! I'm scared!'" (Bouzereau, 2003). Second, MODs offer fans a continuation of and privileged behind-the-scenes access to the story world of films they love. In his article, "The Adventure Continues on DVD: Franchise Movies as Home Video," Pavel Skopal (2007) claims that special edition DVDs, including MODs and other supplements, are intended to construct an "insider" to the film industry: "Two different registers of experience are offered at the same time: one consists of the extension of the experience of the diegetic world; the other involves a promise of emotional participation, mediation of collectivity, sharing the experience of the crew members" (p. 190). On MODs, fans are invited backstage to listen to celebrities tell stories and to meet the people they work with every day. The Lord of the Rings, for example, lives on in over eighteen hours of documentary tours, extending and enhancing our stay in Middle
Earth, and inviting us to see the fantastical creatures created on paper, in clay, and on computer screens. Third, MODs offer specific, technical information, sometimes sketchy, sometimes in-depth, on how films are made, fulfilling the promise in the genre's name. From Superman's first flight to the crushing of the Terminator, MODs include juicy secrets of movie-making magic. Unlike magicians, however, MODs willingly reveal at least some of their secrets. Fourth, MODs follow the conventions of most documentary film, and savvy audiences understand these conventions and types. According to The American Film Institute Desk Reference, documentary films use real people to tell a nonfiction story with "no performers except the real people who are interviewed or filmed going about their business" (Corey & Ochoa, 2002, p. 148). And MODs take their place with other documentary sub-genres: travelogues, exposés,
New York City Housing Authority
At a Glance
Public Housing Authority
Project Types
Building Energy Efficiency, Clean and Renewable Energy, Data Analysis
New York, NY
Net Present Value:
Annual kWh Savings:
143,000,000 kWh
Annual CO2 Reductions:
45,000 metric tons
Rodney Dominique developed projects to improve energy efficiency and reduce operating costs for NYCHA.
At the New York City Housing Authority (NYCHA), Piper Kujac and Rodney Dominique were tasked with developing projects to improve energy efficiency and decrease operating costs.
They modeled the energy intensity of NYCHA housing developments, enabling NYCHA to compare buildings on benchmarks such as square footage, age and number of buildings.
In response to New York City's Local Law 87, which requires all public sector buildings to undergo an energy audit by a set date, Dominique and Kujac created a modified energy audit schedule for NYCHA's most energy-intensive developments.
They wrote a strategic plan to engage NYCHA residents in energy conservation, as well as an estimate of the savings the plan will generate.
They analyzed proposed retrofits to NYCHA's exterior grounds lighting and outdoor sidewalk canopies. Grounds lighting would be outfitted with auto-shutoff timers; canopy lighting would be replaced with more efficient equipment.
Finally, they developed power purchase agreements for five proposed fuel cell power generation and ten proposed combined heat and power on-site pilot projects.
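The energy-intensity benchmarking described above can be sketched as a simple ranking of kWh per square foot. The development names and figures below are invented for illustration, not actual NYCHA data:

```python
# Hypothetical benchmarking sketch: rank developments by energy use
# intensity (kWh per square foot). Figures are illustrative only.
developments = [
    {"name": "Dev A", "annual_kwh": 12_500_000, "sq_ft": 900_000},
    {"name": "Dev B", "annual_kwh": 8_100_000,  "sq_ft": 450_000},
    {"name": "Dev C", "annual_kwh": 5_600_000,  "sq_ft": 520_000},
]

for d in developments:
    d["eui_kwh_per_sqft"] = d["annual_kwh"] / d["sq_ft"]

# Most energy-intensive first: candidates for early audits
for d in sorted(developments, key=lambda d: d["eui_kwh_per_sqft"], reverse=True):
    print(f'{d["name"]}: {d["eui_kwh_per_sqft"]:.1f} kWh/sq ft')
```

A ranking like this is one way a modified audit schedule could prioritize the most energy-intensive developments first.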
Potential Impact
Together, Dominique and Kujac's energy efficiency projects could save the New York City Housing Authority over $25 million, as well as 143 million kilowatt hours of electricity and 45,000 metric tons of carbon dioxide annually.
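As a sanity check on these figures, the stated savings imply a grid emissions factor of roughly 0.31 kg CO2 per kWh (the plausibility judgment is an inference from the numbers above, not a claim made in the source):

```python
# Back-of-envelope check of the stated savings figures
annual_kwh_saved = 143_000_000
annual_co2_tons = 45_000  # metric tons

# Implied grid emissions factor in kg CO2 per kWh
kg_per_kwh = annual_co2_tons * 1000 / annual_kwh_saved
print(round(kg_per_kwh, 2))  # → 0.31
```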
Method (Open Access)
Improved variant discovery through local re-alignment of short-read next-generation sequencing data using SRMA
Nils Homer123* and Stanley F Nelson2
Author Affiliations
1 Department of Computer Science, University of California - Los Angeles, Boelter Hall, Los Angeles, CA 90095, USA
2 Department of Human Genetics, David Geffen School of Medicine, University of California - Los Angeles, 695 Charles Young Drive South, Los Angeles, CA 90025, USA
3 Current address: Ion Torrent, Life Technologies, 7000 Shoreline Court, South San Francisco, CA 94080, USA
Genome Biology 2010, 11:R99 doi:10.1186/gb-2010-11-10-r99
Received: 26 June 2010
Revisions received: 25 August 2010
Accepted: 8 October 2010
Published: 8 October 2010
© 2010 Homer and Nelson; licensee BioMed Central Ltd.
A primary component of next-generation sequencing analysis is to align short reads to a reference genome, with each read aligned independently. However, reads that observe the same non-reference DNA sequence are highly correlated and can be used to better model the true variation in the target genome. A novel short-read micro re-aligner, SRMA, that leverages this correlation to better resolve a consensus of the underlying DNA sequence of the targeted genome is described here.
Whole-genome human re-sequencing is now feasible using next generation sequencing technology. Technologies such as those produced by Illumina, Life, and Roche 454 produce millions to billions of short DNA sequences that can be used to reconstruct the diploid sequence of a human genome. Ideally, such data alone could be used to de novo assemble the genome in question [1-6]. However, the short read lengths (25 to 125 bases), the size and repetitive nature of the human genome (3.2 × 10⁹ bases), as well as the modest error rates (approximately 1% per base) make such de novo assembly of mammalian genomes intractable. Instead, short-read sequence alignment algorithms have been developed to compare each short sequence to a reference genome [7-12]. Observing multiple reads that differ similarly from the reference sequence in their respective alignments identifies variants. These alignment algorithms have made it possible to accurately and efficiently catalogue many types of variation between human individuals and those causative for specific diseases.
Because alignment algorithms map each read independently to the reference genome, alignment artifacts could result, such that SNPs, insertions, and deletions are improperly placed relative to their true location. This leads to local alignment errors due to a combination of sequencing error, equivalent positions of the variant being equally likely, and adjacent variants or nearby errors driving misalignment of the local sequence. These local misalignments lead to false positive variant detection, especially at apparent heterozygous positions. For example, insertions and deletions towards the ends of reads are difficult to anchor and resolve without the use of multiple reads. In some cases, strict quality and filtering thresholds are used to overcome the false detection of variants, at the cost of reducing power [13]. Since each read represents an independent observation of only one of two possible haplotypes (assuming a diploid genome), multiple read observations could significantly reduce false-positive detection of variants. Algorithms to solve the multiple sequence alignment problems typically compare multiple sequences to one another in the final step of fragment assembly. These algorithms use graph-based approaches, including weighted sequence graphs [14,15] and partial order graphs [16,17]. Read re-alignment methods also have been developed [2,18] for finishing fragment assembly but have not been applied to the short reads produced by next generation sequencing technologies.
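One concrete face of this ambiguity is that an indel in repetitive sequence has several equivalent placements, so independent per-read alignments can scatter the same event across positions. The sketch below shows left-normalization of a deletion; it illustrates the ambiguity itself and is not SRMA's algorithm:

```python
def left_align_deletion(ref: str, pos: int, length: int) -> int:
    """Shift a deletion of ref[pos:pos+length] to its leftmost
    equivalent position. In repetitive sequence, several placements
    produce the same haplotype; independent aligners may pick
    different ones, so reads supporting the same event can disagree.
    """
    while pos > 0 and ref[pos - 1] == ref[pos + length - 1]:
        pos -= 1
    return pos

# Deleting "CA" at index 5 of GCACACAT yields the same sequence as
# deleting "CA" at index 1 -- the leftmost representation.
print(left_align_deletion("GCACACAT", 5, 2))  # → 1
```

Without such normalization (or the multi-read consensus SRMA builds), two reads observing the same deletion can appear to support two different variants.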
In this study, a new method to perform local re-alignment of short reads is described, called SRMA: the Short-Read Micro re-Aligner. Short-read sequence alignment to a reference genome and de novo assembly are two approaches to reconstruct individual human genomes. Our proposed method has the advantage of utilizing previously developed short-read mapping as the input, coupled with an assembly-inspired approach applied over discrete small windows of the genome whereby multiple reads are used to identify a local consensus sequence. The proposed method overcomes problems specific to alignment and genome-wide assembly, respectively, with the former treating reads independently and the latter requiring nearly error-free data. Unlike de novo assembly, SRMA only finds a novel sequence variant if at least one read in the initial alignment previously observed this variant. De novo assembly algorithms, such as ABySS and Velvet [1-3,5,6,19], could be applied to reads aligned to local regions of the genome to produce a local consensus sequence, which would need to be put in context to the reference sequence. This approach may still show low sensitivity due to the moderate error found in the data and has not been implemented in practice. For this reason, an important contribution of SRMA is to automate the return of alignments for each read relative to the reference.
SRMA uses the prior alignments from a standard sequence alignment algorithm to build a variant graph in defined local regions. The locally mapped reads in their original form are then re-aligned to this variant graph to produce new local alignments. This relies on the presence of at least one read that observes the correct variant, which is subsequently used to inform the alignments of the other overlapping reads. Observed variants are incorporated into a variant graph, which allows for alignments to be re-positioned using information provided by the multiple reads overlapping a given base. We demonstrate through human genomic DNA simulations and empirical data that SRMA improved sensitivity to correctly identify variants and to reduce false positive variant detection.
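As a rough illustration of the variant-graph idea (a toy sketch, not SRMA's actual data structure or scoring), one can record every allele observed at each position together with its read support, then prefer alignments that traverse well-supported alleles:

```python
from collections import defaultdict

# Toy illustration of the variant-graph idea: at each reference
# position, keep every allele seen in the initial alignments with its
# read support, then score each read's path through those alleles.
def build_variant_graph(alignments):
    """alignments: list of (start, sequence) pairs on one reference."""
    graph = defaultdict(lambda: defaultdict(int))
    for start, seq in alignments:
        for offset, base in enumerate(seq):
            graph[start + offset][base] += 1
    return graph

def score_read(graph, start, seq):
    """Sum of read support for the alleles a read would traverse."""
    return sum(graph[start + i][b] for i, b in enumerate(seq))

aln = [(0, "ACGT"), (0, "ACTT"), (1, "CTTA")]
g = build_variant_graph(aln)
# Position 2 has two observed alleles: G (1 read) and T (2 reads)
print(dict(g[2]))  # → {'G': 1, 'T': 2}
```

Here a read taking the T allele at position 2 scores higher than one taking G, because two overlapping reads support T; this is the sense in which multiple reads inform each individual re-alignment.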
Results and discussion
Local re-alignment of simulated data
To assess the performance of local re-alignment on a dataset with a known diploid sequence, two whole-genome human re-sequencing experiments were simulated (see Materials and methods) to generate 1 billion 50-base paired-end reads, for a total of 100 Gb of genomic sequence representing a mean haploid coverage of 15× for either Illumina or ABI SOLiD data. SNPs, small deletions, and small insertions were introduced to provide known variants and to test SRMA's improvements in their discovery genome-wide, as described in the Materials and methods. The data were initially aligned with BWA (the Burrows-Wheeler Alignment tool) [9] and then locally re-aligned with SRMA. For ABI SOLiD data, SRMA is able to utilize the original color sequence and qualities in their encoded form. However, BWA does not retain this information, so only the decoded base sequence and base qualities produced by BWA were used by SRMA. The aligned reads were used for variant calling before and after local SRMA re-alignment by implementing the MAQ consensus model within SAMtools [10,20].
In Figure 1, we plot receiver operating characteristic (ROC) curves for the detection of the known SNPs, deletions, and insertions. For all types of variants, performing local re-alignment with SRMA greatly reduced the false-positive rate while maintaining the same or a higher level of sensitivity than before SRMA. The false-positive reduction is more evident for indels, largely due to the ambiguity of placing indels relative to the reference sequence based on the initial gapped alignment. At this level of mean coverage, false discovery can be reduced to a rate of 10⁻⁶ for all variants while maintaining >80% power (sensitivity). We note that because inserted bases are directly observed, insertions are more powerfully corrected to the actual sequence relative to deletions. This may help explain the relatively greater improvement in the false positive rate for insertions over deletions at comparable sensitivities.
Figure 1. Local re-alignment receiver operator characteristic curves for simulated human genome re-sequencing data. A synthetic diploid human genome with SNPs, deletions, and insertions was created from a reference human genome (hg18) as described in main text. One billion paired 50-mer reads for both base space and color space were simulated from this synthetic genome to assess the true positive and false positive rates of variant calling after re-sequencing. An increasing SNP quality filter was used to generate each curve. The simulated dataset was aligned with BWA (v.0.5.7-5) with the default parameters [9]. The alignments from BWA and SRMA were variant called using the MAQ consensus model implemented in SAMtools (v.0.1.17) using the default settings [10,20]. For the simulated datasets, the resulting variant calls were assessed for accuracy by comparing the called variants against the known introduced sites of variation. The BWA alignments were locally re-aligned with SRMA with variant inclusive settings (c = 2 and p = 0.1).
These simulations assumed ideal conditions: no genomic contamination, a simple error model with a modest uniform error rate, and a simplification that includes only a subset of all possible variants (SNPs, deletions, and insertions). Nevertheless, the false positive rates achieved after variant calling with no filtering criteria applied are striking and indicate that local re-alignment can be a powerful tool to improve variant calling from short-read sequencing. Longer insertions (>5 bp) are not sufficiently examined in the simulation model. However, we note that longer indels are supported by SRMA, but SRMA requires that the initial global alignment permits the sensitive alignment of reads with longer indels to the approximate correct genomic position.
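The evaluation just described, comparing called variants against the known introduced sites at increasing quality thresholds, can be sketched as simple set arithmetic. Everything below (site names, qualities, the candidate-site count, thresholds) is illustrative, not the study's data:

```python
# Sketch of ROC-style evaluation: compare called variant sites
# against known, introduced sites at a series of quality cutoffs.
def roc_points(calls, truth, n_candidate_sites, thresholds):
    """calls: {site: quality}; truth: set of introduced variant sites."""
    points = []
    for t in thresholds:
        called = {s for s, q in calls.items() if q >= t}
        tp = len(called & truth)          # true variants recovered
        fp = len(called - truth)          # spurious calls
        tpr = tp / len(truth)             # sensitivity (power)
        fpr = fp / (n_candidate_sites - len(truth))
        points.append((t, tpr, fpr))
    return points

calls = {"chr1:100": 40, "chr1:250": 10, "chr2:75": 55, "chr2:90": 5}
truth = {"chr1:100", "chr2:75"}
for t, tpr, fpr in roc_points(calls, truth, n_candidate_sites=1000,
                              thresholds=[0, 20, 50]):
    print(t, tpr, fpr)
```

Sweeping the quality threshold traces out one curve; the paper's comparison is between the curve from the BWA alignments and the curve after SRMA re-alignment.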
Local re-alignment of empirical data
To assess the performance of local re-alignment with SRMA on a real-world dataset, a previously published whole-genome human cancer cell line (U87
A View from Emerging Technology from the arXiv
How To Track Vehicles Using Speed Data Alone
Computer scientists have developed an algorithm that works out a vehicle's destination using only its starting location and speed throughout its journey.
Location is a key indicator of personal travel patterns and habits. Numerous studies of location-based data sets show that they can be used to reveal huge amounts of information about people's routines, commutes, workplaces and other activities. Consequently, there is growing concern that location data must be treated with considerable care.
An increasing number of car insurance companies have begun to take note. One way these companies reduce the cost of insurance is by gathering data about their customers' driving practices.
And to preserve the privacy of their customers, many insurance companies do not collect location data but only time-stamped driving speeds instead. The idea is that the speeds and accelerations that occur when you drive give a good indication of your driving technique without revealing your routes.
Today, Janne Lindqvist, Bernhard Firner and pals at Rutgers University in New Jersey say that this method may not be as privacy preserving as first thought. Indeed, these guys have created an algorithm that can predict the final location of a journey given only the starting point and the time-stamped driving speeds. "We show that with knowledge of the user's home location, as the insurance companies have, speed data is sufficient to discover driving routes and destinations when trip data is collected over a period of weeks," they say.
The problem of determining a route given only the speed of the car is a hard one to solve. Given some starting point, the number of possible routes increases dramatically the further the car travels. Certain patterns of speed changes can help trim the number of possibilities. For example, a car must come to a stop at certain junctions and can only turn left or right when its speed is below some threshold value.
By matching these patterns of speed changes to the topology of the road, it ought to be possible to determine the route the vehicle has taken.
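As a crude sketch of this kind of pattern matching (the thresholds and trace below are illustrative assumptions, not values from the Rutgers study), candidate stops and turn-speed windows can be flagged directly from the speed trace:

```python
# Sketch: flag candidate stops and possible-turn windows in a
# time-stamped speed trace. Thresholds are illustrative assumptions.
STOP_MPH = 1.0    # below this, treat the vehicle as stopped
TURN_MPH = 15.0   # turns typically require speed under some threshold

def classify(trace):
    """trace: list of (seconds, mph); returns per-sample labels."""
    labels = []
    for t, mph in trace:
        if mph <= STOP_MPH:
            labels.append((t, "stop"))
        elif mph <= TURN_MPH:
            labels.append((t, "possible_turn"))
        else:
            labels.append((t, "cruise"))
    return labels

trace = [(0, 30), (5, 12), (10, 0), (15, 8), (20, 35)]
print(classify(trace))
```

Each "stop" or "possible_turn" label is a candidate anchor point where the trace might correspond to a junction on the road network.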
In practice, this is a tricky business. A vehicle may stop at a junction but also for numerous other reasons, such as road works or other hold-ups. A car has to slow down to make a left or right turn but may also slow down in the same way when the car in front turns instead.
Then there are uncertainties over the distance traveled. This varies according to driving technique and the condition of the road, which might require a driver to steer around potholes, for example.
The problem for any algorithm is the sheer number of possible routes that might be taken. The algorithm must compare different possible routes, evaluate them and choose the one that rates most highly. But this only works if the data is good enough to identify the route accurately. And therein lies the problem.
Given these uncertainties, it's easy to assume that the vehicle speed data by itself gives little if any indication of the route taken. However, Lindqvist, Firner and co prove otherwise.
These guys have developed an algorithm that can recreate a vehicle's driving path given its time-stamped speed data and its starting location. Their approach is based on the idea that matching the speed data to a specific path requires the distance moved to be stretched or compressed. "For instance, if the speed data goes to 0 indicating a stop where there is no intersection we might pull the path forward by some distance to reach an intersection," they say.
When the algorithm does this, it "pins" the earlier route, which cannot then be changed. That allows it to focus only on the routes that are possible after the intersection at which it was pinned. "We call this approach elastic pathing because the stretching and compressing of the speed trace to fit the road is conceptually similar to stretching a piece of elastic along a path while pinning it into place at different points," they explain.
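A minimal sketch of the pinning idea (hypothetical names and data — the paper's actual algorithm is considerably more involved): integrate the speed trace to get a travelled distance, and when a stop does not land on a known intersection, snap the path to the nearest one and fix everything before it.

```python
# Illustrative sketch of "elastic" stretching: the distance implied by the
# speed trace is snapped to the nearest intersection when the car stops.
# The intersection distances below are invented example data.

def distance_at_stop(trace):
    """Trapezoidal integration of speeds (m/s, sampled every 1 s)
    up to the first full stop; returns distance in metres."""
    dist, prev_v = 0.0, None
    for v in trace:
        if prev_v is not None:
            dist += (prev_v + v) / 2.0  # dt = 1 s
        if v == 0.0:
            return dist
        prev_v = v
    return dist

def pin_to_intersection(dist, intersections):
    """Stretch/compress the path by snapping the integrated distance
    to the nearest known intersection along the candidate road."""
    return min(intersections, key=lambda x: abs(x - dist))

trace = [10.0, 10.0, 6.0, 2.0, 0.0]   # speeds at 1 s intervals
d = distance_at_stop(trace)            # 23.0 m
print(pin_to_intersection(d, [15.0, 25.0, 60.0]))  # 25.0
```

Once pinned at 25.0 m, a search would only need to consider continuations beyond that intersection, which is what keeps the candidate set manageable.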
To test the algorithm, Lindqvist, Firner and co measured the speed characteristics of seven drivers travelling from their homes to 46 unique destinations over 240 journeys. At the same time, they also measured the location of the cars using a GPS device to give ground truth data.
The results are revealing. Lindqvist, Firner and co say they were able to predict the final destination to within 500 metres for 20 percent of the journeys. "This means that a location visited daily can be identified in about a week; locations visited on a weekly basis could be identified with slightly more than a month of data," they say.
Even when it is not possible to identify the destination, the algorithm nevertheless rules out a large portion of possible destinations. What's more, it also identifies variations in daily routines without needing to know anything about the final endpoint.
That contradicts the claim, made by some insurance companies, that speed data gives no information about a vehicle's route or destination.
Location data can be used to gain all kinds of insights into a person's behaviour, social activities and work activities. Lindqvist, Firner and co suggest that an interested party could get answers to questions such as: "Did you go to an anti-war rally on Tuesday?", "Did you see an AIDS counselor?", "Have you been checking into a motel at lunchtimes?", "Why was your secretary with you?" or "Which church do you attend? Which mosque? Which gay bars?"
Lindqvist, Firner and co point out that even if insurance companies do not use speed data in this way now, there is no guarantee that they won't in future, or that some other organisation might not mine the data in this way. Indeed, part of the problem is that speed data is not considered private and so may be made available in ways that private data can never be.
These guys end by saying that there are various alternatives to speed data that give a good indication of driving habits but offer much better privacy protection. For example, some insurance companies simply gather mileage data or minutes of use.
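To illustrate why such aggregates leak less (assumed sample data, not from the paper): a full per-second speed trace is what makes elastic pathing possible, whereas mileage and minutes of use collapse the same trace to two numbers that say nothing about the route taken.

```python
# Illustrative comparison: the privacy-friendlier summary an insurer could
# collect instead of a full time-stamped speed trace. Data is made up.

def aggregate(trace):
    """trace: speed samples (m/s) at 1 s intervals.
    Returns (mileage_m, minutes_of_use)."""
    mileage = sum(trace)                           # rectangle rule, dt = 1 s
    minutes = sum(1 for v in trace if v > 0) / 60.0
    return mileage, minutes

trace = [0.0, 5.0, 10.0, 10.0, 0.0]
print(aggregate(trace))  # (25.0, 0.05)
```

Many different routes produce identical (mileage, minutes) pairs, so the aggregation discards exactly the per-second structure that route reconstruction depends on.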
Something to think about next time you opt for a usage-based insurance policy.
Ref: _URL_.0052: Elastic Pathing: Your Speed is Enough to Track You
or punishment. To make this more imaginable and understandable to our soldiers -- and I use that in a joint context -- we have included in the field manual specific prohibitions.
There's eight of them.
Interrogators may not force a detainee to be naked, perform sexual acts or pose in a sexual manner. They cannot use hoods or place sacks over a detainee's head or use duct tape over his eyes. They cannot beat or electrically shock or burn him or inflict other forms of physical pain, any form of physical pain. They may not use water boarding. They may not use hypothermia or treatment which will lead to heat injury. They will not perform mock executions. They may not deprive detainees of the necessary food, water and medical care. And they may not use dogs in any aspect of interrogations.
The interrogation approach techniques in this field manual have undergone favorable interagency legal review and been judged to be consistent with the requirements of law, Detainee Treatment Act and the Geneva Conventions, as well as policy.
KIMMONS: The field manual was reviewed and endorsed by senior DOD figures at the secretarial level, by the Joint Staff, by each of the combatant commanders and their legal advisers, by each of the service secretaries and service chiefs and their legal advisers, in addition to the director of the Defense Intelligence Agency and the director of national intelligence, who coordinated it laterally with the CIA.
It has also been favorably reviewed by the Department of Justice.
The field manual contains 19 interrogation approaches. No other techniques are authorized within the Department of Defense. Sixteen of these are traditional interrogation approaches which were enshrined in the old Field Manual, 34-52.
Based on battlefield lessons learned, we have added two additional approaches to the main body of the field manual, and those are Mutt and Jeff (good cop, bad cop) and false flag -- portraying yourself as someone other than an American interrogator.
Those were added for general purpose use across all detainees categories. Those 18 techniques are authorized for use Department of Defense-wide and worldwide, regardless of status.
Our four star combatant commanders also specifically requested, based on battlefield experience, that we include one restricted technique, called separation, for use on a by exception basis only with unlawful combatants. That is, it is not authorized for use on prisoners of war and other protected persons.
Separation allows interrogators to keep unlawful enemy combatants apart from each other as a normal part of the interrogation process so they can't coordinate their stories and so that we can compare answers to questions which interrogators have posed to each other without there having been collusion.
It is for the same reason that police keep murder suspects separated while they are questioning them, although this is within an interrogation context.
Separation meets the standard for humane treatment, the single standard that exists across DOD and it is enshrined in this manual. But the Geneva Conventions afford additional protections, privileges, if you will, to lawful combatants above and beyond the humane standard.
KIMMONS: It authorizes lawful combatants to receive mail, and send packages. It authorizes them to receive pay for work that they perform. It also protects them from being separated from their fellow prisoners of war with whom they were captured without their expressed consent.
These additional privileges, above and beyond the humane standard, are not an entitlement which our unlawful combatants enjoy. And you can imagine for practical reasons why we would want to keep unlawful combatants and include terrorists separated from one another, albeit within a humane environment.
Nonetheless, special interrogator training and certification is required for our interrogators to use this restricted approach. A very high level of command oversight is also required. Four star combatant commanders must approve the use within their respective theaters of operation. A second general officer or flag officer must review and approve each interrogation plan which incorporates the use of separation. And typically a number of techniques will be included in any given interrogation plan.
We've built mandatory safeguards for interrogation into all of the interrogation approach techniques in the field manual to ensure humane application. The field manual also includes many examples of correct usage of these techniques. It tries to leave as little to the imagination as possible without being overly prescriptive, and we think we've done a good job.
The field manual clarifies military intelligence and military police's roles, which are complementary but discrete in some important respects. Military police do not participate in interrogation, they do not set conditions, they do not soften up our detainees. That is explicitly written into the field manual and will be trained.
The field manual also defines the roles and functions which health care providers may perform within the context of interrogation, which is very limited and essentially limited to normal precautionary medical inspection and care, as well as emergency services.
KIMMONS: But they are not authorized to assist in -- directly assist interrogators.
The field manual reiterates a standard established by Department of Defense Directive 3115.09 for strict control of access to detainees by non-DOD personnel, other government agencies or other foreign governments. And basically it requires a joint task force commander or a theater commander to approve that access. And if access is granted, the non-DOD agency must be escorted and observed by a trained, certified DOD member.
And, also, the non-DOD agency must agree to comply with the safeguards, provisions, and use the techniques and only the techniques enshrined in this field manual.
The field manual makes clear that commanders of forces which conduct detention operations or interrogation operations are directly accountable and responsible for humane detainee treatment, in addition to their other command responsibilities. It emphasizes the responsibility of every service member to report observed, suspected or alleged detainee abuse, and it tells them how to do it. It also gives them guidance on how to report if they suspect their chain of command is complicit.
The bottom line is, this is a very good -- this is a very good field manual. Our soldiers, sailors, airmen and Marines need it to get this tough work done. And we need to put it into their hands without further delay.
QUESTION: Sir, are you concerned, as an intelligence officer, that specifying exactly the 19 techniques that can be used and not having anything else classified will hinder your troops' ability to gather the intelligence that they need?
KIMMONS: That's a good question, and it's one that we, frankly, wrestled with for several months.
KIMMONS: We weighed that against the needs for transparency and working openly with our coalition partners who don't have access to all of our classified publications and also the need to be as clear as we can be in the training of these techniques to our own soldiers, sailors, airmen and marines, to reduce the risks of inadvertent migration from a classified domain into an unclassified text by virtue of them being separated.
And so on balance, in consultation with our combatant commanders, we decided to go this route. We're very comfortable with it. So are our combatant commanders.
Now, having said that, I'd just add, this manual is going to be revised or at least reviewed for revision on an annual basis. That's no different than any other doctrinal publication. Based on battlefield lessons learned, new policy which may come out, we may revise this downstream. And so I think we have flexibility to make adjustments as required.
QUESTION: General, why was the decision made to keep these categories, these separate categories, of detainees? You have traditional prisoners of war and then the unlawful enemy combatants. Why not treat all detainees under U.S. military custody the exact same way?
KIMMONS: Well, actually the distinction is in Geneva, due to the Geneva Convention, which describes the criteria for lawful combatants such as enemy prisoners of war -- the attributes they possess: wearing a uniform, fighting for a government, bearing your arms openly and so on and so forth. It is all spelled out fairly precisely inside Geneva.
Geneva also makes clear that traditional unlawful combatants, such as -- 50 years ago, we would have talked about spies and saboteurs, but also now applies to this new category or new type of unlawful combatants, terrorists, Al Qaida and Taliban. They clearly don't meet the criteria for prisoner of war status, lawful combatant status. And so they are not entitled, therefore, to the extra protections and privileges which Geneva affords.
STIMSON: And let me jump in, too. It's important to remember that for the first time in DOD history, here we are establishing for all detainees, regardless of their legal status, a baseline standard of care and treatment. And those are the standards announced in and shown in enclosure 3 and 4. So Common Article 3 plus the additional protections articulated in enclosure 4.
So with respect to how they're treated at a minimum, there is no difference. But people earn their rights into certain categories in the Geneva Conventions. And as the general said, an enemy prisoner of war is a person who abides by, among other things, the laws of war, fights for a country, open arms, wears a uniform, et cetera. But you have to differentiate between legal status and then standard of care and treatment.
QUESTION: Mr. Stimson, does the directive change the policy on detention operations or merely define it more clearly?
STIMSON: The directive lays out the overarching policy guidance to combatant commanders and the Department of Defense. It clarifies the older 1994 policy. The 1994 policy was written mainly with enemy prisoner of war in mind, not this category of nonstate actors with global lethality; here, unlawful enemy combatants. And so it incorporates those lessons learned. It is applicable to today, moving | 1.215397 | Zyphra/Zyda-2 |
The LTF's mission is to explore and push the frontiers in time and frequency research, optical metrology, and ultrafast science and technology.
LTF is also helping Switzerland to join, in the near future, the limited number of countries that actively participate in the definition of International Atomic Time (TAI) with primary frequency standards, through the development of the unique atomic fountain clock FOCS-2, which operates with a continuous beam of cold cesium atoms.
LTF's key competences to achieve its research objectives are:
- Ultrafast laser development and analysis
- Various frequency comb systems
- State-of-the-art ion beam sputtering (IBS) machine for custom optics fabrication
- Cold atoms
- Noise/stability analysis for microwave/optical oscillators
- Stabilisation of microwave/optical oscillators
- Vapour cell manufacturing and characterisation
- CPT and double-resonance spectroscopy in alkali vapour cells
- Vapour-cell atomic clocks
- Time & Frequency metrology
- State-of-the-art reference H-maser | 1.208333 | m-a-p/FineFineWeb |
Welcome to the Inner Archeology Podcast! During today's conversation, we discuss the life-changing power of raising your baselines, through creating new norms in your day-to-day life. We both believe that your life is made by the tiny choices you make every day, compounding the changes you make into one exponential curve towards the life you want. It's so important to show up even when shit is hard! Sometimes things will feel off, and we believe that's the time to get specific about what the main problems are, to empower yourself to make the necessary changes. It's okay to focus on one thing at a time, and we explore how to do that during today's conversation. Some of these concepts came to us through reading The Slight Edge, including the fact that most people make an effort to change until the discomfort has been alleviated, and then fall back into old ways of being. We discuss course correcting, another concept from the book, and talk about dividing up goals into bite-sized pieces to avoid getting overwhelmed. Join us for a motivating conversation full of tools you can implement immediately to begin to enact real change in your life. Thank you for tuning in!
Key Points From This Episode:
- An introduction to today's topic: raising your baselines.
- What is meant by raising your baselines and creating new norms in your day-to-day.
- How, when you make these changes to your autopilot, you forget what it was like before.
- Why your life is made by the tiny choices you make every day.
- Why the idea of the Quantum Leap can be really harmful.
- The Compound Effect: making changes to your life to create an exponential curve.
- Why it is so important to show up even when shit is hard.
- Why it is important to distill a general feeling that something is off into the main problems.
- Why the urge to change everything is a sure sign that you need to get specific about what to change.
- Focusing on one thing at a time, and committing to having a new standard for yourself.
- How the Slight Edge describes how most people make an effort and then stop.
- Drawing solutions to the problem into your conscious mind and separating problems from self-loathing.
- Course correcting and focusing on solving problems instead of beating yourself up.
- Why it is so helpful to divide up your goals into bite-sized pieces.
- How judgments often come from other people's values and not our own.
- Why, at the end of the day, you just have to figure out what works for you.
- How guilt and self-loathing are strongly based on what we believe makes a good human.
- Why tough love is not always the answer when it comes to motivating yourself.
- Why things get easier when you start to raise your baselines.
- Operating from a place of defeat and not separating yourself from the emotional piece of making a change.
- The two hardest things that Sarah has ever done: getting divorced and quitting drinking.
- The gift that comes out of the Dark Night of the Soul: faith in yourself.
- Why self-trust is the number one thing Sarah wants to cultivate in her children.
- An affirmation: my problems are not unique, everyone has survived and thrived what I'm going through, the answers are out there, I just have to be open and willing to receive them.
Links Mentioned in Today's Episode:
Inner Archaeology on Patreon
Atomic Habits on Amazon
The Slight Edge on Amazon
Inner Archeology Email
Sarah Turner on Instagram
Emily Pennystone on Instagram | 1.230797 | openbmb/Ultra-FineWeb |
Year: 1991 Source: American Journal of Psychiatry, v.148, no.6, (June 1991), p.775-779 SIEC No: _PHONE_
Compared the proportion of blood relatives who had had psychiatric hospitalizations, had committed suicide, or had self-reported mental illness to the proportion of spouses with the same manifestations. The proportion of blood relatives significantly exceeded that of spouses. Since heterozygous carriers of the Wolfram syndrome gene are 50-fold more common among the blood relatives than spouses, this is evidence that heterozygous carriers are predisposed to significant psychiatric illness. | 1.544294 | EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample |
Will teeth return to their old positions after orthodontic treatment?
Teeth whose positions have been corrected by orthodontic treatment may relapse and return to their old positions if retention treatment is not carried out. This does not mean that your treatment was bad; retention is simply a precaution to stop the teeth from drifting back. When your orthodontic treatment is over, you may finish with a nice tooth structure, but after the wires and braces are removed your teeth will try to return to their old places after a while. This is where retainers come in: they hold the teeth in place while your gums, bones and muscles get used to the new positions and the bone hardens. If these appliances are not used, the teeth will return to their old places and your orthodontic treatment will be unsuccessful and incomplete.
Ask yourself: will your teeth stay in place after braces?
What is retention treatment?
Keeping your teeth permanently in their new positions is part of orthodontic treatment. After active treatment is finished, "retention treatment" is performed to hold the teeth in position while the bone hardens. Retainers are used to maintain and stabilise the tooth positions obtained after orthodontic treatment. Your orthodontist will decide which appliance is most appropriate for you. Retention is one of the important stages of orthodontic treatment. There are two types of retainers: fixed and removable appliances.
Fixed Retainers
They are not visible because they are bonded to the backs of the teeth. They do not disturb the tongue and do not affect speech. This is the most commonly used method. There is no risk of breaking or losing them because they are glued to the teeth. They can only be removed by an orthodontist.
Removable Retainers
Removable retainers can be easily attached and detached and are usually made of transparent plates. They are easy to clean. Because they can be taken out of the mouth, they are also easy to lose; for this reason, when they are removed, they should be carried in a special case. It is important that retention is planned at the diagnosis and treatment-planning stage and refined during treatment.
Why do teeth return to their old positions after orthodontic treatment?
– The patient does not use the retainers the orthodontist gives them.
– Wisdom teeth (the "20-year-old teeth") erupting after treatment and pushing the other teeth.
– The patient continuing a clenching or grinding habit after treatment is over.
– Excess muscle movements in the chin, lips and cheeks.
– Changes that occur naturally in the lower incisors over time.
Attesting Russian Documents For The UAE
The attestation of Russian Documents can be quite a complicated process and varies depending on the type of document you wish to attest. The general process for all documents is detailed below.
How It Works
- Send us your documents or have them collected by our specially allocated couriers.
- We'll have them sent safely to Moscow, Russia, where four things will happen:
1. The document will be sent to the Ministry of Justice where a stamp of authentication will be applied.
2. Then, your document will be sent to the Ministry of Foreign Affairs, where another stamp will be applied.
3. Thirdly, your document will be sent to the UAE Embassy in Moscow, where a Certificate of Legalisation will be added to the front page.
4. Fourth, your document will be translated from Russian into Arabic, and subsequently notarised for acceptance in the UAE.
- Finally, your document will be returned to the UAE and attested at the Ministry of Foreign Affairs (MOFA), before being sent back to you (or anywhere else you might need), safe and sound.
This process is the same across the board, though there can be some differences depending on the type of document you're trying to attest. For example, in the case of a Degree Diploma, it is often necessary to send the original copy of the completed degree to us before it can be attested. If it isn't possible for you to get the original copy though, it isn't a problem. We can help you legalise the copy of the Diploma, though there will be an extra charge.
In terms of the translation, costs will vary depending on the number of characters needing to be translated on your document. Please contact us for more information.
Request a Quote
Click the "Request a Quote" button above, or click here to be taken to our quote page. We aim to get back to you as soon as possible.
Why Vital Certificates?
As you can see, attesting a document can be quite complicated! Not to worry. Vital Certificates can handle the attestation process from end-to-end, regardless of the type of document you're trying to attest, which means you can sit back, relax and let us do the work.
We've helped thousands of clients have their documents attested, and haven't once had a document rejected in the UAE. Using our UAE attestation service gives you complete peace of mind and ensures that you get what you require in a timescale to suit your needs. Our staff are in constant contact with our legalisation partners in Russia, and as such are fully aware of the steps required to make sure your attestation goes smoothly.
Client Feedback
"This morning I received my attested documents back from you. They arrived by courier.
Firstly, I was very cautious about sending both my Russian wedding certificate and son's Kazakh birth certificate to you for UAE attestation services. The absolute difficulties and expense that could occur from trying to obtain replacement certificates should they have been lost is just simply incalculable. Anybody who has lived within these former CIS states will absolutely understand the challenges faced by anyone trying to obtain copies of original certification, let alone someone living outside these nation states. My fears were unfounded. You delivered prompt, efficient service and from start to finish the process took 29 days. This includes the courier time from when I sent the documents to your office in UAE from Thailand to me receiving the attested documents back in my hands.
Thank you again for your prompt, efficient service."
Jonathan Lettington - Dubai, UAE | 1.050865 | Zyphra/Zyda-2 |
RLC, 389, 398, 410 alternating-current, 394–418 direct-current, 204–207 equivalent, 206 resonant, 388–394 circulation, 90 Clausius–Mossotti relation, 502 CO (carbon monoxide) molecule, dipole moment of, 483 coefficients of capacitance, 148 of potential, 148
Index
coil cylindrical (solenoid), magnetic field of, 300–303, 338 toroidal energy stored in, 369 inductance of, 364 Cole, R. H., 505 comets, 454 compass needle, 239 complex exponential solutions, 402–405 complex-number representation of alternating current, 406–408 complex numbers, review of, 828–829 conduction, electrical, 181–204 ionic, 189–195 in metals, 198–200 in semiconductors, 200–204 conduction band, 201–202 conductivity, electrical, 182–188 anisotropic, 182 of metals, 198–200 units for, 182 of various materials, 188, 195–197 conductors, electrical, 125–141 charged, system of, 128 properties of, 129 spherical, field around, 131 conformal mapping, 151 conservation of electric charge, 4–5, 180–181 distinguished from charge invariance, 242 conservative forces, 12 continuity equation, 181 copper, resistivity of, 188, 196–197 copper chloride, paramagnetism of, 526 corona discharge, 37 coulomb (SI unit of charge), 8, 762 relation to esu, 9 Coulomb, Charles de, 10 Coulomb's law, 7–11, 259 tests of, 10–11 Crandall, R. E., 11 Crawford, F. S., 378 critical damping, 394 Crosignani, B., 590 cross product (vector product) of two vectors, 238
Curie, Pierre, 566 Curie point, 566 curl, 90–99, 798–799 in Cartesian coordinates, 93–95, 100 physical meaning of, 95 curlmeter, 96 current density J, 177–180 current loop magnetic dipole moment of, 534 magnetic field of, 531–535 torque on, 547 current ring, magnetic field of, 299 current sheet, 303–306 magnetic field of, 303–304 currents alternating, 394–418 bound and free, 559–560 bound-charge, 505–507 displacement, 433–436 electric, see electric currents fluctuations of, random, 195 curvilinear coordinates, 791–801 cylinder, magnetized, compared with cylinder polarized, 557 cylindrical coordinates, 792 damped harmonic oscillator, 389 damped sinusoidal oscillation, 392 damping of resonant circuit, 388–394 critical, 394 Davis, L., Jr., 11 decay of proton, 6 decay time for earth's magnetic field, 386 deer, flying, 102 "del" notation, 83, 95, 100 detergent, 510 deuterium molecule, 242 Di Porto, P., 590 diamagnetic substances, 526 diamagnetism, 527, 540, 546 of electron orbits, 545 diamond crystal structure of, 200 wide band gap of, 203 dielectric constant κ, 468 of various substances, 469 dielectric sphere in uniform field, 495–496 dielectrics, 467–471
diode, 219 silicon junction, 229 vacuum, 181 dipole comparison of electric and magnetic, 535–536 electric, see electric dipole magnetic, see magnetic dipole dipole moment electric, see electric dipole moment magnetic, see magnetic dipole moment disk conducting, field of, 140 charged, 68–72 displacement, electric, D, 499, 560–561 displacement current, 433–436 distribution of electric charge, 20–22 divergence, 78–79, 795–797 in Cartesian coordinates, 81–83, 100 divergence theorem, 79–80, 100 domains, magnetic, 567 doorbell, 321 doping of silicon, 203–204 dot product of two vectors, 12 dynamic random access memory (DRAM), 153 dynamo, 379, 386 dyne (Gaussian unit of force), 8 0 , permittivity of free space, 8 Earnshaw's theorem, 87 earth's magnetic field, 280, 577 decay time of, 386 possible source of, 380 eddy-current braking, 370 Edison, Thomas, 419 Einstein, Albert, 2, 236, 281, 314 electret, 558 electric charge, 1–11, 242 additivity of, 10, 13 conservation of, 4–5, 180–181 distribution of, 20–22 free and bound, 497–498, 506–507 fundamental quantum of, 8 invariance of, 241–243 quantization of, 5–7, 242 sign of, 4 electric currents, 177–189 and charge conservation, 180–181
energy dissipation in flow of, 207–208 parallel, force between, 283 variable in capacitors and resistors, 215–216 in inductors and resistors, 366–367 electric dipole potential and field of, 73–77, 474–476 torque and force on, in external field, 477–478 electric dipole moment, 74, 473, 475 induced, 479–482 permanent, 482–483 electric displacement D, 499, 560–561 electric eels, 219 electric field definition of, 17 in different reference frames, 243–246 of dipole, 75, 476 of Earth, 36 energy stored in, 33 of flat sheet of charge, 29 flux of, 22–26 Gauss's law, 23–26 inside hollow conductor, 134 of line charge, 28 line integral of, 59–61 macroscopic, 488–489 in matter, spatial average of, 487 microscopic, 488 of point charge with constant velocity, 247–251 relation to φ and ρ, 89 transformation of, 245, 310 units of, 17 visualization of, 18–20 electric field lines, 18, 19, 71, 72, 76–77 electric generator, 370 electric guitar, 370 electric potential, see potential, electric electric quadrupole moment, 74, 473 electric susceptibility χe , 490, 501, 503 electrical breakdown, 36, 100 electrical conduction, see conduction, electrical electrical conductivity, see conductivity, electrical electrical conductors, see conductors, electrical electrical insulators, 125–126
electrical potential energy, 13–16 of a system of charges, 33, 63 electrical shielding, 135 electrodynamic tether, 369 electromagnet, 320 design of, 584 electromagnetic field components, transformation of, 310 electromagnetic force, range of, 11 electromagnetic induction, 343–357 electromagnetic wave, 254, 438–453 in dielectric, 507–509 in different reference frames, 452–453 energy transport by, 446–452 general properties of, 440–441 reflection of, 445, 447, 521 standing, 442–446 traveling pulse, 441 electromotive force, 209–211, 347, 357 alternating, 395 electron, 3, 5, 6, 198–204, 540–549 charge of, 8 magnetic moment of, 547 valence, 200 electron motion, wave aspect of, 199 electron orbit, 540–545 diamagnetism of, 545 magnetic moment of, 540–541 electron paramagnetic resonance (EPR), 823 electron radius, classical, 52, 545 electron spin, 546–549 angular momentum of, 546–547 electronic paper, 37 electrostatic field, 61, see also electric field equilibrium in, 88 electrostatic unit (esu) of charge, 8, 765 energy, see also potential energy, electrical in alternating-current circuit, 415–418 dissipation of, in resistor, 207–208 electrical, of ionic crystal, 14–16 stored in capacitor, 150 in electric field, 33 in inductor, 368 in magnetic field, 369 of system of charges, 11–14 energy gap, 201 equilibrium of charged particle, 88
equipotential surfaces, 71, 131
  in field of conducting disk, 140
  in field of dipole, 76
  in field of uniformly charged disk, 72
equivalence of inertial frames, 237, 805
equivalent circuit, 206
  for voltaic cell, 211
esu (electrostatic unit), 8, 765
Faller, J. E., 11
farad (unit of capacitance), 142
Faraday, Michael, 2, 236, 314
  discovery of induction by, 343–345
  reconstruction of experiment by, 384
  Waterloo Bridge experiment by, 380
Faraday's law of induction, 356–357
ferrofluid, 572
ferromagnetic substances, 526
ferromagnetism, 527, 565–568
Feynman, R. P., 37, 539
field
  electric, see electric field
  magnetic, see magnetic field
  meaning of, 245
Fisher, L. H., 348
fluctuations of current, random, 195
flux
  of electric field, definition of, 22–26
  magnetic, 348–351
flux tube, 349, 351
force components, Lorentz transformation of, 810–811
  application of, 255–
In John 15:25, it is written
Where in the Old Testament (in the exact phrasing) was Jesus quoting from? Scholars and theologians usually say that it is Psalm 69:4, where it is written
Yet while I can see the resemblance, I don't exactly think it matches up with what Jesus said. Since this isn't an exact matching translation, can we then assume that there must have existed a manuscript of Old Testament literature (most likely the Psalms) that had that exact phrasing? Or should we stick with Psalm 69:4? Or perhaps there are some other verses in the Psalms or elsewhere?
The Septuagint translation gives Those who hate me without a cause are more than the hairs of my head. for Psalm 69:4, – Nigel J Nov 28 '18 at 1:18
- As fleshly incarnation of the Word itself Jesus perfectly took upon himself the teaching role of Rabbi. He had authority to rephrase because He was the original author; see the "You have heard it said...but I say..." format of Matthew chapter 5. – Mike Borden Feb 19 '20 at 13:24
NWT Psalm 35:19 "Do not let those who for no reason are my enemies gloat over me; Do not let those hating me without cause wink their eyes maliciously."
This do?
When the NT "quotes" the OT, two things must be remembered:
1. The version of the OT that we use (in English) is a translation of the Hebrew OT. However, when the NT quotes the OT, it most often uses the LXX, which is regularly slightly different from the Hebrew. (Some argue that some of these passages are older than the Masoretic text and are to be preferred, but that is quite debatable.)
2. When the NT quotes even the LXX, it is very rarely verbatim but most often a slight paraphrase. Therefore, "exact wording" is rarely found. This is true here.
UBS5 lists two verses (Ps 35:19, 69:4) that may have been amalgamated/merged to create the phrase in John 15:25 - just 3 words in the Greek. The phrase in John 15:25 does not occur "exactly" in the OT as far as I could find. However, it appears to be a verbal allusion or parallel to the texts listed.
Jesus said in John 15:25, "But this is to fulfill what is written in their Law: 'They hated me without reason,'" and many translations render it the same way. To some that only loosely resembles Psalm 35:19: NIV "hate me without reason"; ASV "hating me without cause"; KJV "neither let them wink with the eye that hate me without a cause". For me this resembles Mark 4:12
Table of Contents
- The A to Z of Pokémon Collection: Everything You Need to Know
- The Basics of Pokémon Collection
- Strategies for Building an Impressive Pokémon Collection
- The Joy of Completing the Pokédex
- Case Study: The Pokémon Collection Journey of Ash Ketchum
- Key Takeaways for Pokémon Trainers
- 1. How many Pokémon are there in total?
- 2. Can I complete the Pokédex without trading with other trainers?
- 3. Are there any Pokémon that cannot evolve?
- 4. How can I participate in Pokémon events?
- 5. Can I transfer Pokémon from older Pokémon games to newer ones?
- 6. Are there any Pokémon that are exclusive to certain regions?
- 7. Can I collect Pokémon in real life?
Pokémon, the beloved franchise that took the world by storm in the late 1990s, has captured the hearts of millions of fans across the globe. With its captivating characters, engaging gameplay, and a vast world to explore, Pokémon has become a cultural phenomenon that continues to thrive to this day. One of the most exciting aspects of the Pokémon franchise is the collection of Pokémon, which spans from A to Z. In this article, we will delve into the world of Pokémon collection, providing valuable insights and tips for both new and seasoned trainers.
The Basics of Pokémon Collection
Before we dive into the details, let's start with the basics. Pokémon collection refers to the act of capturing, training, and evolving Pokémon creatures. Trainers embark on a journey to catch as many different Pokémon species as possible, filling up their Pokédex, a digital encyclopedia that records information about each Pokémon.
Each Pokémon has its own unique characteristics, abilities, and types. There are currently 898 Pokémon species, each with its own name and number. The Pokémon collection journey begins with the starter Pokémon, which players choose at the beginning of their adventure. From there, trainers encounter wild Pokémon in various locations, such as forests, caves, and cities, and attempt to capture them using Poké Balls.
Strategies for Building an Impressive Pokémon Collection
Building an impressive Pokémon collection requires careful planning and strategic thinking. Here are some tips to help you on your journey:
- Research Pokémon Locations: Different Pokémon species can be found in specific locations. Researching the habitats and regions where certain Pokémon appear will increase your chances of encountering them.
- Trade Pokémon: Trading Pokémon with other trainers is a great way to expand your collection. Some Pokémon can only be obtained through trading, so connecting with other trainers is essential.
- Participate in Events: Pokémon games often feature special events where rare Pokémon are available for a limited time. Keep an eye out for these events and make sure to participate to add unique Pokémon to your collection.
- Breed Pokémon: Breeding Pokémon allows you to obtain Pokémon with specific traits or moves. By strategically breeding Pokémon, you can create a diverse and powerful collection.
- Evolve Pokémon: Many Pokémon have multiple evolutionary stages. By leveling up or using special items, you can evolve your Pokémon into more powerful forms. Evolving Pokémon not only strengthens your collection but also unlocks new abilities and moves.
The Joy of Completing the Pokédex
Completing the Pokédex, the ultimate goal for many trainers, is a monumental achievement. The Pokédex serves as a comprehensive record of all Pokémon species, providing valuable information about their characteristics, abilities, and evolution paths. It is a testament to a trainer's dedication and perseverance.
Completing the Pokédex requires capturing every Pokémon species, including both basic forms and their evolutions. This task can be challenging, as some Pokémon are incredibly rare or can only be obtained through specific methods, such as trading or participating in special events. However, the sense of accomplishment and satisfaction that comes with completing the Pokédex is unparalleled.
Case Study: The Pokémon Collection Journey of Ash Ketchum
Ash Ketchum, the iconic protagonist of the Pokémon animated series, provides an inspiring case study of a Pokémon collection journey. Throughout his adventures, Ash has captured and trained numerous Pokémon, showcasing the diversity and depth of the Pokémon world.
Starting with his loyal Pikachu, Ash has caught a wide range of Pokémon, each with its own unique abilities and personalities. From the fiery Charizard to the water-loving Squirtle, Ash's collection represents a diverse array of Pokémon types and species.
Ash's journey also highlights the importance of friendship and bonding with Pokémon. He forms deep connections with his Pokémon, treating them as valued companions rather than mere tools for battle. This emotional bond is a crucial aspect of the Pokémon collection journey, as it fosters trust and strengthens the trainer-Pokémon relationship.
Key Takeaways for Pokémon Trainers
As you embark on your own Pokémon collection journey, keep these key takeaways in mind:
- Research and Plan: Understanding the locations, evolution paths, and unique traits of Pokémon will greatly enhance your collection efforts.
- Connect with Other Trainers: Trading Pokémon and participating in events will help you obtain rare and exclusive Pokémon.
- Embrace the Journey: Building a Pokémon collection is not just about completing the Pokédex. It's about the experiences, friendships, and memories you create along the way.
- Train and Evolve: Leveling up and evolving your Pokémon will unlock their full potential and make your collection more formidable.
- Enjoy the Adventure: Pokémon collection is a lifelong journey. Embrace the joy of exploration, discovery, and the thrill of encountering new Pokémon.
1. How many Pokémon are there in total?
Currently, there are 898 Pokémon species in total.
2. Can I complete the Pokédex without trading with other trainers?
No, some Pokémon can only be obtained through trading with other trainers. Trading is an essential aspect of completing the Pokédex.
3. Are there any Pokémon that cannot evolve?
Yes, many Pokémon species do not evolve at all. These include most "legendary" and "mythical" Pokémon, as well as a number of ordinary single-stage species.
4. How can I participate in Pokémon events?
Pokémon events are often announced through official Pokémon websites, social media channels, and in-game notifications. Stay updated and follow the instructions provided to participate in these events.
5. Can I transfer Pokémon from older Pokémon games to newer ones?
Yes, in most cases, you can transfer Pokémon from older games to newer ones using specific methods and tools provided by the game developers. However, not all Pokémon can be transferred, so it's important to check compatibility before attempting transfers.
6. Are there any Pokémon that are exclusive to certain regions?
Yes, certain Pokémon species are exclusive to specific regions in the Pokémon world. This encourages trainers to explore different locations and connect with trainers from other regions to complete their collections.
7. Can I collect Pokémon in real life?
While Pokémon collection primarily takes place in the virtual world of Pokémon games, there are real-life Pokémon collectibles
High Functioning Autism. Reed, Vicki
Abstract: This paper reviews the characteristics and needs of students with high functioning autism. First, it lists 18 common characteristics of autism, then it stresses that autism is defined by the general pattern of characteristics. Next, it discusses how people with high functioning autism differ from those with autism. These differences include higher cognitive abilities, more normal language functioning, better social functioning, a tendency toward specialization, and a generally better prognosis as a functioning adult. Discussion of the diagnostic process notes the negative connotations of the term "autism," and the frequent use of the terms "Pervasive Developmental Disorder" or "Asperger Syndrome," instead, for this high functioning group. Other diagnostic concerns include the need for observation in natural settings, overlap of symptoms with other disorders, the importance of early diagnosis, and a lack of knowledge about autism by many professional psychologists. A section on behavior management of autistic children stresses their need for routine and structure, management of transitions, their tendency to learn best by doing, ways to substitute more suitable behaviors for undesirable ones, and the need to avoid overstimulation. Specific ways to manage misbehavior are also suggested, such as ignoring the behavior, positive reinforcement, physical prompting, and unemotional discipline. (Contains 14 references.) (DB)
Title: High Functioning Autism.
Author: Reed, Vicki
Note: 12p.; Paper presented at the Annual School Social Work Association of America Conference (1st, Louisville, KY, September 26-27, 1996).
Publication Year: 1996
Document Type: Non-classroom Material (055); Review Literature (070)
Target Audience: Policymakers
ERIC Identifier: ED408765
Clearinghouse Identifier: EC305636
Descriptors: * Autism; Clinical Diagnosis; Definitions; Developmental Disabilities; * Disability Identification; Early Identification; Elementary Secondary Education; * Mild Disabilities; * Severity [of Disability]; * Student Characteristics
Identifiers: *Aspergers Syndrome; *Pervasive Developmental Disorders
NGC 2899: double-lobed planetary nebula shines in VLT image
A two-pronged planetary nebula glows against a backdrop of stars in new VLT image.
This is planetary nebula NGC 2899, as seen in a new image captured using the Very Large Telescope. It's the sharpest, most detailed image of the nebula ever produced.
NGC 2899 is about 3,000 to 5,000 light-years away in the constellation of Vela.
Planetary nebulae have nothing to do with planets, as their name might suggest, but are so-called because they are dying stars that expel layers of gas and material into space, usually producing a rounded, planet-like shape.
In the case of NGC 2899, however, 2 stars are thought to be involved, creating a 2-pronged butterfly shape instead.
It's thought that after 1 of the 2 stars reached the end of its life and began casting off its outer layers, the other star started interfering with the flow of gas, producing the 2-lobe shape.
Astronomers captured this image using the FORS instrument on the VLT, revealing clarity and sharpness never seen in other images of the target. Even the fainter edges of the nebula glow brighter than the background stars.
- Find out more in our beginner's guide to nebulae.
Observatory Very Large Telescope
Release date 30 July 2020
Image credit ESO
Iain Todd is BBC Sky at Night Magazine's Staff Writer. He fell in love with the night sky when he caught his first glimpse of Orion, aged 10.
Getting To Know Software Defined Radio
Are you interested in staying ahead of the curve when it comes to interacting with the latest in electronic technologies? Are you the early adopter amongst your friends looking to gain experience with the new and unique as soon as possible? If so, then it might be time to get involved with Software Defined Radio and its related enhancements. While new developments and advancements are discovered each day, SDR promises to offer several benefits to countless industries. Below is a list of frequently asked questions for those who have yet to experiment with Software Defined Radio:
What is the Purpose of Software Defined Radio?
The main intention of SDR development is to do away with all of the complex analog parts located within a traditional radio system. Instead, they are replaced with software that can accomplish the same task.
What is SDR Capable of in terms of Modulation?
It can perform the modulation – and demodulation – of all potential modes: NFM, WFM, AM, SSB, USB, LSB, CW, etc. You can also work satellites without much difficulty, allowing for easy access to images from weather satellites.
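In practice, "demodulating AM" in SDR software means operating directly on the complex baseband (IQ) samples. The sketch below is a minimal, library-free illustration of envelope detection; the sample rate, tone frequency, and carrier offset are arbitrary values chosen for the example, not settings tied to any particular SDR package.

```python
import math

def am_demodulate(iq_samples, dc_block=True):
    """Recover the audio envelope from complex baseband (IQ) samples.

    AM carries its information in the carrier's amplitude, so the
    demodulated audio is simply the magnitude of each IQ sample,
    optionally with the carrier's DC offset removed.
    """
    envelope = [abs(z) for z in iq_samples]
    if dc_block:
        mean = sum(envelope) / len(envelope)
        envelope = [e - mean for e in envelope]
    return envelope

# Synthesize an AM signal at complex baseband: a carrier offset 5 kHz
# from centre, amplitude-modulated by a 1 kHz audio tone.
fs = 48_000   # sample rate, Hz (assumed)
n = 4_800     # exactly 100 cycles of the 1 kHz tone
iq = [
    (1.0 + 0.5 * math.sin(2 * math.pi * 1_000 * t / fs))
    * complex(math.cos(2 * math.pi * 5_000 * t / fs),
              math.sin(2 * math.pi * 5_000 * t / fs))
    for t in range(n)
]
audio = am_demodulate(iq)   # recovers the 1 kHz tone, peak amplitude ~0.5
```

Other modes need different math (FM uses the phase difference between successive samples, SSB a frequency shift and filter), but all of them start from the same IQ stream.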
How does SDR interact with the RF Spectrum?
Instantly increase your ability to perceive a larger portion of the RF spectrum, allowing you to notice otherwise unnoticeable differences in older technology. You can also calculate various RF measurements, such as signal strength, interference patterns, etc.
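Signal-strength readings of the kind mentioned above are usually derived from the mean power of the IQ samples. A minimal sketch, reporting dBFS relative to an assumed full-scale amplitude of 1.0:

```python
import math

def power_dbfs(iq_samples, full_scale=1.0):
    """Mean signal power of IQ samples, in dB relative to full scale.

    dBFS is a common way SDR software reports signal strength:
    0 dBFS corresponds to a tone at the assumed full-scale amplitude.
    """
    mean_square = sum(abs(z) ** 2 for z in iq_samples) / len(iq_samples)
    return 10 * math.log10(mean_square / full_scale ** 2)

# A full-scale complex tone reads 0 dBFS; halving its amplitude
# drops the reading by about 6 dB.
tone = [complex(math.cos(0.1 * t), math.sin(0.1 * t)) for t in range(1000)]
full_scale_db = power_dbfs(tone)                      # ~0 dBFS
half_scale_db = power_dbfs([0.5 * z for z in tone])   # ~-6 dBFS
```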
What do I need to begin using SDR?
Getting started with your own Software Defined Radio is simple, requiring only the minimum in hardware to begin. Once you have an antenna, a device to convert signals from the antenna to the computer, a computer, a soundcard, and the proper SDR software, you're more than ready.
Is there only one kind of Software?
Of course not! Just as there are several types of traditional radio devices, the improvement of electronic design has allowed many different varieties of SDR software to be developed. Find the software that best matches your needs and utilize it to fulfill your industrial requirements.
Is it difficult to learn how to use SDR?
No harder than learning any other radio-based system you've used previously. Regardless of the software, most systems utilize a very similar GUI to streamline user experience and allow for quick comprehension. An optimized control panel exists in most systems, allowing for frequency adjustment, mode changing, filter changing, audio level manipulation, and other features necessary to complete your specific task.
Chichén Itzá, Mexico The Mystery of the Decline of the Maya /
This site is one of the most impressive testimonies to the Mayan-Toltec civilization of the Yucatán (10th to 15th centuries). It contains some of the most outstanding examples of Central American architecture, combining Mayan construction techniques and Toltec sculpted decoration.
Other Authors: Infobase, Global Screen.
[Place of publication not identified] :
1 online resource (1 video file (12 min., 42 sec)) : sound, color.
Treasures of the World. North America and the Caribbean Sea.
Common Core Catholic Identity Initiative
A national working group has begun the Common Core Catholic Identity Initiative (CCCII) to develop and disseminate resources and guidelines to assist Catholic elementary and secondary schools in integrating elements of Catholic identity (Catholic values, Scripture, Church social teachings, encyclicals, etc.) into curriculum and instruction based on the Common Core State Standards.
The initial phase of CCCII focuses on K-8 English/Language Arts/ Literacy. Resources for other subjects and for 9-12 curriculum will be developed in later phases.
Forty-six states have agreed to adopt the Common Core State Standards, a set of high quality K-12 learning standards that includes rigorous content and application of knowledge using higher-order thinking skills, leading students to college and career readiness. Currently, Catholic schools are assessing what the implications of the standards and accompanying assessments may be for them.
While Catholic schools have their own local or diocesan standards, their ability to continue to provide high-quality education for their students is compelling them to consider adoption of the common core standards. Catholic schools will be impacted as curriculum resources and professional development opportunities become aligned with Common Core State Standards by producers of instructional materials, college teacher preparation programs, or regulations for participation in the federal programs that currently benefit their students and teachers. Within this environment, maintaining the uniqueness and integrity of the Catholic school will require integrating the demands of their mission and the academic expectations of their constituents and the wider education community.
To assist Catholic schools with enhancing Catholic identity integrated into the curriculum, the Common Core Catholic Identity Initiative (CCCII) has been launched as a collaborative project involving Catholic universities, corporations and sponsors invested in Catholic education, and the National Catholic Educational Association (NCEA).
The Common Core Catholic Identity Initiative has two goals:
- to empower Catholic schools and dioceses to design and direct the implementation of the Common Core standards within the culture and context of a Catholic school curriculum
- to infuse the Common Core standards with the faith/principles/values/social justice themes inherent in the mission and Catholic identity of the school.
The CCCII project aims to accomplish its goals by creating a process and a product:
Phase 1: Gather approximately 35 practitioners and curriculum and catechetics experts to pilot a CCCII ELA Unit development process to be shared with the larger Catholic educational community. (June 2012)
Phase 2: Revise and refine the unit development process so that it can be replicated in dioceses around the country.
Phase 3: Invite participation in development of additional CCCII ELA Units by Catholic educators around the country.
Phase 1: Utilize the expertise and strength of experienced and innovative teachers to develop complete units/exemplars that join Catholic identify with the Common Core curriculum standards. Utilize the expertise of CCCII leaders to develop supporting resources and guidelines. (June 2012)
Phase 2: Post exemplar units, guidelines, and resources developed for the June 2012 launch for open access by Catholic educators on the Catholic School Standards Project Website (_URL_). (July 2012)
Phase 3: Expand exemplar units and Catholic Identity resources available for use by local Catholic schools.
Tailor the CCCII Unit development process for Catholic secondary schools.
Expand CCCII to include additional subject areas.
Meet the CCCII Leadership and Planning Teams
Space | August 25, 2014
Back to the future with Neptune's fascinating moon Triton
On the anniversary of Voyager 2's encounter with Neptune and Triton … an awesome collection of restored Voyager 2 images, plus the link between Triton and Pluto.
August 25, 1989. On this date, the incredibly durable and successful spacecraft Voyager 2 made the closest approach to Neptune and the large moon Triton. This has special significance with a current mission, New Horizons to Pluto.
Let me explain the Triton / New Horizons connection. Triton is the only large planet sized moon to orbit the parent planet in a backwards or retrograde direction, and also in a high inclination (that is, Triton's orbit is significantly inclined to the plane of the solar system). Triton orbits Neptune once every 5 days, 21 hours and 3 minutes at a mean distance from Neptune of 354,800 kilometers / 220,331 miles. Triton has a diameter of 2,708 kilometers / 1,682 miles and a mean global density of 2.061 grams per cubic centimeter.
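The orbital figures quoted above can be cross-checked with Kepler's third law. The sketch below assumes a standard value for Neptune's gravitational parameter (GM) and neglects Triton's own mass, which is tiny compared with Neptune's:

```python
import math

# Kepler's third law for a (nearly) circular orbit: T = 2*pi*sqrt(a^3 / GM)
GM_NEPTUNE = 6.8365e15   # m^3 s^-2, assumed standard value for Neptune
a = 354_800e3            # Triton's mean orbital radius, metres

period_s = 2 * math.pi * math.sqrt(a ** 3 / GM_NEPTUNE)
period_days = period_s / 86_400   # ~5.88 days, matching the quoted 5 d 21 h 3 min
```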
Prior to Voyager 2, it was thought that Triton was much larger, more like the size of Mercury, the Jupiter moons Ganymede and Callisto, or the Saturn moon Titan. However, during Voyager 2's approach, Triton appeared smaller and much more reflective than expected, reflecting approximately 90% of incident sunlight. Triton was also very faint in the infrared part of the spectrum, meaning that it is extremely cold, with an average global surface temperature of minus 237 Celsius / minus 395 Fahrenheit. That's even taking into account the Neptune system's distance from the sun of 4.515 billion kilometers / 2.804 billion miles (30.1 times the sun-to-Earth distance), where sunlight is only 1/906th as strong as at Earth. Voyager 2 imaged 40% of Triton in detail, with a further 10% at lesser resolutions.
Triton certainly did not form around Neptune. Remember how people used to speculate that Pluto was an escaped moon of Neptune? The truth appears to be that Triton and Pluto are indeed related, but not in that way. Keep reading.
Voyager 2 mosaic of Triton.
On Triton, Ruach Planitia, a frozen lake. Image via Voyager 2.
On Triton, cantaloupe terrain with faulting. Image via Voyager 2.
Dark patches on Triton. Image via Voyager 2
So Triton did not form around Neptune; that is a given fact. Captured moons are not unusual: both Mars moons, Phobos and Deimos, are likely captured, as are the Saturn moon Phoebe (also in a retrograde orbit around Saturn, and the second largest such moon in our solar system) and many of the outer moons of Jupiter, Saturn, Uranus and Neptune. But those moons are relatively tiny. What sets Triton apart is its sheer size and mass. Triton is the 7th largest moon in the solar system (our own moon is 5th largest, and Jupiter's Ganymede is the largest) and is more massive than all of the other moons in our solar system that are smaller than itself put together. Triton is approximately 70% rock and 30% ice.
It appears that Triton was a former dwarf planet / KBO (Kuiper Belt Object), much like present-day Pluto and Eris (both similar in size and mass to Triton), the largest known KBOs / dwarf planets (see where I am going with this), that was captured into orbit around Neptune.
The initial orbit would have been very elliptical, as forward momentum was robbed from Triton by Neptune's powerful gravity. Energy is never destroyed (mass and energy being related by Einstein's famous equation E = mc²), so the forward energy was dissipated in the form of heat. The globe of Triton was stretched and squeezed, and Triton's innards melted with frictional heat. Its surface melted, too. In fact, the original surface was completely destroyed and renewed, with slushy ices erupting and doing the resurfacing. We see similar resurfacing today, to varying degrees of frictional heating, on the Jupiter moons Io and Europa and the Saturn moon Enceladus (Io by far the most, with huge volcanoes and tectonic activity).
When Triton's orbit finally settled down, the surface refroze in the intense cold. Water ice on Triton today is as hard as solid rock at these temperatures, also carbon monoxide, carbon dioxide, ammonia, methane and nitrogen also exist as ice and were detected on Triton's surface.
At the time of the Voyager 2 pass, Triton was near an extreme solstice, with most of its southern hemisphere in the sunlight of Triton's midsummer. Its nitrogen ice cap was slowly sublimating and reforming on the northern mid-winter side. The tenuous Triton atmosphere (only 10 times denser than the exosphere through which the International Space Station orbits above Earth) displayed hazes and thin clouds. There are also geysers of nitrogen ice, where super-cooled liquid nitrogen was bursting through.
Now it appears that the capture of Triton occurred in geologically recent times, perhaps only 500 million years ago, judging by the low number of impact craters and fresh appearance of frozen icy lakes, Ruach Planitia being a good example, etc. Some parts of Triton's surface could be as young as 10 million years.
The link with the upcoming Pluto encounter with New Horizons next July? Pluto is only a little smaller than Triton, 2,368 kilometers / 1,470 miles wide, similar densities, similar surface temperature, slightly warmer due to a slightly darker surface colouring at minus 229 Celsius / minus 380 Fahrenheit, similar rock to ice ratio (70% to 30%). Pluto appears to be like Triton before Triton was captured by Neptune, so the appearance will certainly be very different, more cratered for sure.
The limb (edge) of Triton in color via Voyager 2.
Overlays of Triton's southern hemisphere, via Voyager 2.
Neptune and Triton, via Voyager 2.
Bottom line: On the anniversary of Voyager 2's encounter with Neptune and Triton … an awesome collection of restored Voyager 2 images, plus the link between Triton and Pluto.
STH-10 - Sales Tax Holiday - Qualifying Computers
A computer is defined as a central processing unit (CPU), along with various other components including monitor, keyboard, mouse, cables to connect components, and preloaded software. While the CPU may be purchased separately, other items must be part of a bundled computer package in order to be eligible.
Items excluded from the holiday are individual computer parts, such as monitors, keyboards, speakers, and scanners when not sold in conjunction with a CPU; individually purchased software or other software not part of a preloaded software package on the initial purchase of a computer; storage media, such as diskettes and compact disks; handheld electronic schedulers; personal digital assistants (PDAs); e-readers, video game consoles; and computer printers and supplies for printers, such as paper and ink.
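The rules above amount to a simple decision procedure: a CPU qualifies on its own, listed peripherals qualify only as part of a bundled computer package, and certain items never qualify. The sketch below only illustrates that logic; the item names and groupings are assumptions made for the example, not official tax classifications.

```python
# Items the notice excludes even when sold alongside a computer.
ALWAYS_EXCLUDED = {
    "printer", "printer supplies", "storage media", "pda",
    "e-reader", "video game console", "handheld scheduler",
}

# Components that qualify only as part of a bundled computer package.
BUNDLE_ONLY = {
    "monitor", "keyboard", "mouse", "cables", "speakers",
    "scanner", "preloaded software",
}

def qualifies(item, sold_with_cpu=False):
    """Return True if an item is exempt under the rules sketched above."""
    item = item.lower()
    if item in ALWAYS_EXCLUDED:
        return False
    if item == "cpu":
        return True            # a CPU may be purchased separately
    if item in BUNDLE_ONLY:
        return sold_with_cpu   # eligible only within a bundled package
    return False               # anything unlisted: assume not exempt
```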
demonstrates that the observed phenomenon is not likely strain specific and may occur in multiple brain regions. These results suggest that Fabp7 may play a role in developmental processes involved with plasticity in the maternal LS and actively mediate emotional changes associated with the postpartum period.
Cytokine signaling in the CNS influences how stem cells respond to hormones and plays an important role in the differentiation of neural progenitor cells into either glia or neurons [49]. A family of genes called "suppressors of cytokine signaling" (Socs) is known to be a negative regulator of such pathways [50]. Socs2 is the most abundant of these proteins in the CNS [51], and it is thought to mediate a negative feedback loop on messaging pathways downstream of growth hormone binding in the central nervous system. Stem cells cultured from Socs2 knockout mice produced 50% fewer neurons when induced to differentiate, and generated more astrocytic glial cells [52]. Conversely, Socs2 overexpressing stem cells yielded a higher than normal neuron to astrocyte ratio after differentiation. Socs2 was dramatically upregulated in maternal LS compared to both virgin and pups removed (46% and 53%; Fig. 3). The strength of developmental relevance in these microarray data provides some support for the idea that maternity can be viewed as another stage in the mammalian life cycle characterized by terminal differentiation of the CNS. If indeed these gene changes reflect developmental activity in the maternal LS, further anatomical and histological studies may be able to provide more direct evidence by visualizing structural changes involved in shaping the maternal brain.
Additional Enrichment Findings in Maternal LS
Functional annotation clustering revealed a small cluster of genes influencing the synthesis of cyclic nucleotides (Fig. 4), with most members exhibiting downregulation in the maternal LS compared to virgin. Cyclic nucleotides play a central role in a variety of intracellular signaling pathways as second messengers [53], [54] and can modulate neuronal excitability via the binding of cyclic nucleotide gated (CNG) channels. CNG channels are well known for their role in sensory transduction in retinal and olfactory cells, but are also expressed widely in the mammalian CNS and are likely involved with synaptic plasticity and development [55], [56]. The enrichment of this gene cluster indicates that, in addition to altering ion channel activity directly through expression of the channels themselves, there may be an additional level of regulation facilitated by fluctuating levels of cyclic nucleotides available to cells in the maternal LS.
There was a large degree of enrichment in a nucleosomal gene cluster primarily composed of histone genes. These genes were almost exclusively downregulated, but it is not clear what implications this has on the function of the maternal LS. It is possible that a downregulation of histone mRNA may reflect changes in post-transcriptional processes that influence the stability of histone mRNA transcripts [57], [58]. The visualized cluster in Figure 4 shows a strong interaction between chromobox homolog 1 (Cbx1, also known as heterochromatin protein 1 β) and the H3 histone. Cbx1 plays a major role in regulating higher order chromatin structure and gene transcription [59]. Additionally, Cbx1−/− knockout mice exhibit a lethal phenotype characterized by aberrant neocortical development with reduced proliferation of neuronal precursors, demonstrating developmental relevance [60]. While the significance of the nucleosomal gene cluster is not clear, the robustness and consistency of its enrichment is profound, and may influence chromatin remodeling in LS in the establishment of the maternal brain.
The Ras related gene cluster includes members of the Ras family, which is involved with many different cellular processes. It has been widely linked to tumor formation and has been shown to contribute in conjunction with thymoma viral proto-oncogene (Akt) signaling to glioblastoma formation in the brain [61]. All three members of the Akt family exhibited indications of altered expression in maternal LS compared to virgin in our microarray results (Table S1). In addition to cancer, certain Ras members of this enriched gene cluster also influence exocytosis and vesicle trafficking [62], [63].
Expression Changes of Anxiety Related Genes in Maternal LS
A number of genes, such as the GABAA receptors and other neuronal signaling genes have been linked to anxiety. Glutathione reductase (Gsr) is also linked to anxiety, and qPCR confirmed a 12% decrease in Gsr in the maternal LS compared to virgin (Fig. 3). Lentiviral in vivo overexpression of Gsr in the cingulate cortex of C57BL/6J mice results in significant increases in anxiety-related behavior [64]. The same study also showed that, across several strains of mice, Gsr activity was highest in the most anxious strains and lowest in the least anxious strains, suggesting relevance of Gsr activity to normal variation in anxiety. The transition from virgin to lactating maternal states involves a natural change in anxiety, in which postpartum mice respond less to general stressors [65]. The Gsr mRNA reduction we measured in maternal mice is in agreement with these observations, but whether these changes in LS are causally linked to altered anxiety is not known.
Sensory Input Contributions to Gene Expression in Maternal LS
The present study revealed that a subset of genes was strongly influenced by the presence of pups. The maternal state can involve both short term and long term changes. Studies indicate that maternal characteristics are more strongly and stably expressed with increasing numbers of pregnancies and that this occurs with long lasting gene expression changes [66]–[68]. Among the 809 significant gene expression changes between maternal and virgin LS, 69 were found to also be significantly different (FDR-adjusted p<0.25) between maternal and pups removed LS (Table 2). For these genes, the removal of pups most fully restored virgin-like levels of mRNA even after the experience of mating, pregnancy, and parturition. We interpret these genes as being more dependent on the continued presence of pups, in contrast to genes that were differentially expressed between maternal and virgin LS but not between maternal and pups removed LS. This list represents an interesting subset of genes that are malleable in response to the social aspects of maternity. While formal pathway analysis cannot be reliably conducted on such a small set of genes, the basic categorization in Table 2 shows that they span numerous functional groupings. Fabp7 and Socs2 are notable members of this subset, which suggests that the developmental processes in which they are involved may be ongoing and driven in the postpartum period by social interaction.
Materials and Methods
Animals
Outbred hsd:ICR female mice (Mus domesticus) (Harlan, Madison WI) were used for all experiments. Nulliparous animals were split into three age-matched groups (∼70 days of age at time of dissection), designated as lactating maternal, pups removed, and virgin. For mating, females in the lactating maternal and pups removed groups were housed in polypropylene cages with a breeder male for 2 weeks. Virgin females were concurrently co-housed with one another to provide similar levels of exposure to social stimuli. After the separation of breeder males, all females (pregnant and virgin) were housed individually and provided precut nesting material until dissections. Under this schedule, all females experienced similar levels of co-housing and single housing to minimize potential effects of isolation-induced stress. Cages were changed once per week until pups were born (postpartum Day 0), after which cages were not changed again for all animals until dissection. On Day 0, pups were culled, if necessary, to standardize litter size to eleven. For females in the pups removed group, pups were removed from the cage on postpartum Day 2. The pups removed group was included in the experimental design to provide insight as to whether or not continued sensory input from pups is required, in addition to parturition, to generate expression changes characteristic of the maternal phenotype. All animals were housed in the same room with cages of each experimental group positioned in an alternating fashion on the same shelves. A 12∶12 light/dark cycle with lights on at 06∶00 h CST was used. Female mice were provided with ad lib access to breeder chow (Harlan) and tap water. Procedures were performed in strict accordance with the guidelines of the National Institutes of Health Guide for the Care and Use of Laboratory Animals, and all studies were approved by the University of Wisconsin Animal Care and Use Committee.
Tissue Collection and RNA Extraction
On postpartum Day 7, brains were removed from females in the lactating maternal and pups removed groups between 10∶00 and 12∶00 h. Brains from age-matched virgin females were collected on the same day, during the same time period. Dissections were alternated among groups so that an equal number of dissections from each group were performed. Animals were lightly anesthetized with isoflurane and decapitated. After decapitation, vaginal lavage was performed on virgin and pups removed females to determine their estrous state. All females in the pups removed group were diestrous, while virgin females exhibited variance in est
Arctic sea ice falls to record low. Global warming?
The decline in sea ice coincides with warming at the top of the world that has been occurring twice as fast there as it has for the northern hemisphere as a whole as the global climate warms.
By Staff writer
A polar bear in the Arctic National Wildlife Refuge in Alaska. Arctic sea ice has reached its lowest summer extent since satellites first began keeping track in 1979.
Earth's icy skull cap, floating atop the Arctic Ocean, has reached its lowest summer extent since satellites first began keeping track in 1979, and by some estimates its lowest reach in nearly 1,500 years.
As of Sept. 7, the Arctic Ocean's expanse of summer ice this month spanned less than 1.54 million square miles, nearly six times the size of Texas and some 45 percent less than the average for the same month through the 1980s and '90s, according to the National Snow and Ice Data Center in Boulder, Colo. And the ice is still retreating; the summer melt season typically ends in mid to late September.
The previous record low was set in 2007, a result of an unusual set of conditions – clear skies during most of the summer and wind patterns that drove large amounts of ice past Greenland and into the North Atlantic. This summer, no such "perfect storm" for ice loss appeared.
Instead, much of the ice left over from winter – coming out of a summer that until now had been the second lowest melt-back in the satellite record – was thin enough to break no matter which way the wind blew, according to NSIDC researchers.
Indeed, the ice hit hardest by the long-term decline is the thick ice that once survived several years of thaw and freeze. With more of the Arctic Ocean starting the freeze season as open water, an increasing proportion of winter ice heading into the melt season is relatively thin – more vulnerable to wind-driven break-up when the melt season returns, which can speed melting.
The summer sea-ice cover at the end of the melt season has been declining since the early 1970s, although since 1979 satellites have provided the most consistent measurements of the decline.
The decline coincides with warming at the top of the world that has been occurring twice as fast there as it has for the northern hemisphere as a whole as the global climate warms. Climate scientists attribute the general warming to rising concentrations of atmospheric greenhouse gases – mainly carbon dioxide – from burning fossil fuels since the start of the Industrial Revolution as well as from land-use changes.
The higher pace of Arctic warming, linked in part to rising greenhouse gases as well as to the interplay between ice, snow, and ocean that is reinforcing the warming trend, has implications for more than caribou and polar bears.
This so-called Arctic amplification increases the likelihood of severe weather at mid-latitudes in the northern hemisphere, where most people live, according to a study published earlier this year in the journal Geophysical Research Letters.
"The Arctic is warming so much faster than mid-latitudes, and it's that difference in temperature that drives the jet stream," a river of air that triggers and steers storms, says Jennifer Francis, an atmospheric scientist at Rutgers University who focuses much of her research on the Arctic and is the lead author on the study.
As the temperature difference shrinks, the jet stream's speed slows and the north-south meanders it makes as it snakes from west to east grow longer. Both changes slow the jet stream's pace, contributing to the blocking patterns that lead to persistent bouts of heat, cold, or precipitation.
In the fall, the heat the Arctic Ocean stores in summer is released as the air above it gets colder. This slows the return of sea ice. And it reduces the temperature contrast between the Arctic and mid-latitudes, Dr. Francis explains, contributing to the blocking patterns that can appear in the late fall and winter. The jet stream's elongated meanders can bring one storm after another to parts of the continent while keeping other parts relatively storm-free. And the slowdown in the jet stream's migration across the hemisphere sets up the blocking patterns that can hold those conditions in place for weeks.
In the spring and summer, a different process can reduce the temperature difference between latitudes, she continues, one that she says has received far less attention than declining summer sea ice.
"In the last few years, we've also had record-low snow amounts in June and July on land at high latitudes," she says, resulting from what she calls a very robust trend toward earlier spring weather that melts the snow.
The land dries out sooner, with less moisture available to evaporate and keep a rein on rising temperatures. Air over the land areas warms sooner, reducing the temperature contrast into the spring and summer.
The effect on temperatures from a landscape deprived of its normal supply of rain or snow has been operating on overdrive this year, bringing severe to exceptional drought to much of the US.
Francis adds that the blocking pattern that kept the center of the US virtually rainfall-free, held temperatures over Greenland high enough to trigger melting across the entire top of the ice sheet, and gave Britain a persistently dreary, rainy summer, is consistent with the effects she and her colleague, Stephen Vavrus, a climate scientist at the University of Wisconsin at Madison, identify in their study.
) \
f(Inner, internal, FIELD_NORMAL, ##__VA_ARGS__) g() \
f(unsigned int, width, FIELD_NORMAL, ##__VA_ARGS__) g()
#define sandbox_fields_reflection_mylib_allClasses(f, ...) \
f(Foo, mylib, ##__VA_ARGS__)
...
Since Foo has an Inner struct, though, we'll need to also provide a definition for this struct if we want to access the val value:
...
#define sandbox_fields_reflection_mylib_class_Inner(f, g, ...) \
f(int, val, FIELD_NORMAL, ##__VA_ARGS__) g()
#define sandbox_fields_reflection_mylib_class_Foo(f, g, ...) \
f(unsigned char[5], status_array, FIELD_NORMAL, ##__VA_ARGS__) g() \
f(Inner, internal, FIELD_NORMAL, ##__VA_ARGS__) g() \
f(unsigned int, width, FIELD_NORMAL, ##__VA_ARGS__) g()
#define sandbox_fields_reflection_mylib_allClasses(f, ...) \
f(Inner, mylib, ##__VA_ARGS__) \
f(Foo, mylib, ##__VA_ARGS__)
...
Each struct file is intended to hold all struct definitions associated with a library.
Danger
The compiler currently doesn't catch type mismatches, missing members, or incorrectly ordered members in the struct definition.
Take a look at the ogg struct layout in Firefox for a complete example.
In the future we will likely generate these kinds of files automatically.
3.2 Invoking varargs functions
RLBox does not yet support calling functions with variable arguments. If the library you are sandboxing has APIs with variable arguments, you need to monomorphize the usages: create a wrapper function for each usage you wish to expose. Consider the following example:
// Original library function:
int example_call(int x, int y, ...);
...
// Original invocations in application code:
rv = example_call(x, y, RESET_FLAG);
rv = example_call(x, y, INVERT_FLAG, 'c');
One way to handle this in RLBox is to expose two functions as follows:
int example_call_reset(int x, int y, uint8_t flag) {
return example_call(x, y, flag);
}
int example_call_invert(int x, int y, uint8_t flag, char c) {
return example_call(x, y, flag, c);
}
Then, you can call them as usual:
rv = sandbox.invoke_sandbox_function(example_call_reset, x, y, RESET_FLAG);
rv = sandbox.invoke_sandbox_function(example_call_invert, x, y, INVERT_FLAG, 'c');
3.3 Invoking C++ functions in a sandbox
RLBox does not currently let you invoke C++ library functions directly; instead you need to expose a C ABI. Free functions are largely straightforward, but methods require a bit of work since the receiver object is implicit: the simplest way to handle class methods is to expose C functions that take a pointer to the receiver object as the first argument.
3.4 Passing application pointers into the sandbox with app_pointer
It's sometimes useful to pass application pointers into the sandbox. For example, you may need to pass a pointer to the receiver (this) so the sandboxed library can pass it back in a callback. We can't trust the sandbox with actual application pointers, so RLBox instead provides a level of indirection.
If you want to pass a pointer into the sandbox you can use the get_app_pointer() API, which returns an app_pointer. For example, in the expat library we need to pass a pointer to the receiver:
mAppPtr = mSandbox->get_app_pointer(static_cast<void*>(this));
// convert the app_pointer to tainted
tainted_expat<void*> t_driver = mAppPtr.to_tainted();
// call function as usual:
mSandbox->invoke_sandbox_function(..., t_driver);
where mAppPtr is a member of the class:
app_pointer_expat<void*> mAppPtr;
Internally, RLBox keeps a map between app pointers and the corresponding tainted pointers exposed to the sandbox. This lets you lookup the pointers in callbacks, for example:
void callback(rlbox_sandbox_expat& aSandbox, ... tainted_expat<void*> t_driver) {
nsExpatDriver* self = aSandbox.lookup_app_ptr(rlbox::sandbox_static_cast<nsExpatDriver*>(t_driver));
Like callbacks, you need to keep app pointers alive and unregister them when you're done:
mAppPtr.unregister();
4 Additional material
Here is some additional material on how to use RLBox.
- A good next step after this tutorial is to get hands-on migrating an application to using a library that you want to sandbox. The simple library example repo is a "toy" application that uses a potentially "buggy" library. Try migrating the application to use the RLBox API based on what you've learnt in this tutorial. The solution is available in the solution folder in the same repo.
- Here is an alternate short tutorial on using the RLBox APIs. Note that this tutorial uses an alternate RLBox sandbox plugin (which uses the Lucet Wasm compiler and RLBox plugin rather than the wasm2c based plugin recommended in this tutorial, but this does not affect the use of the RLBox APIs themselves).
- Another useful example of using the RLBox APIs is the RLBox test suite itself.
- You can also see usage of the RLBox APIs in the Firefox browser by using the Firefox code search.
- Finally, the academic paper discussing the development of RLBox and its use in Firefox [RLBoxPaper] at the USENIX Security conference 2020 and the accompanying video explanations are a good way to get an overview of RLBox.
____________________| at
|________________________| s
|________________________| t
|______________________| on
|_____________________| all
|___________________| this
|___________________| for
|___________________| had
|__________________| but
|_________________| be
|_________________| not
|________________| they
|________________| so
The winner:
Shortest solution (by character count, per language). Have fun!
Edit: Table summarizing the results so far (2012-02-15) (originally added by user Nas Banov):
Language Relaxed Strict
========= ======= ======
GolfScript 130 143
Perl 185
Windows PowerShell 148 199
Mathematica 199
Ruby 185 205
Unix Toolchain 194 228
Python 183 243
Clojure 282
Scala 311
Haskell 333
Awk 336
R 298
Javascript 304 354
Groovy 321
Matlab 404
C# 422
Smalltalk 386
PHP 450
F# 452
TSQL 483 507
The numbers represent the length of the shortest solution in a specific language. "Strict" refers to a solution that implements the spec completely (draws |____| bars, closes the first bar on top with a ____ line, accounts for the possibility of long words with high frequency etc). "Relaxed" means some liberties were taken to shorten to solution.
Only solutions shorter than 500 characters are included. The list of languages is sorted by the length of the 'strict' solution. 'Unix Toolchain' is used to signify various solutions that use a traditional *nix shell plus a mix of tools (like grep, tr, sort, uniq, head, perl, awk).
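For orientation only (not a competitive entry), here is a plain, un-golfed Python sketch of the task; the function name `histogram` and its parameters are our own choices, and the layout follows the spec above (longest bar plus its word fits the column width):

```python
import re
from collections import Counter

# Stop words excluded by the question's spec
STOP = {"the", "and", "of", "to", "a", "i", "it", "in", "or", "is"}

def histogram(text, width=80, bars=22):
    """Readable reference version of the bar chart described above."""
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]
    top = Counter(words).most_common(bars)
    first_word, first_count = top[0]
    # '|' + bar + '| ' + word must fit in `width` columns for the first bar:
    m = width - len(first_word) - 3
    lines = [" " + "_" * m]  # closing line above the first bar
    for word, count in top:
        lines.append("|" + "_" * (count * m // first_count) + "| " + word)
    return "\n".join(lines)
```

Each subsequent bar is scaled proportionally to the top word's count, which is what the golfed solutions below compress into a handful of characters.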
locked by animuson Nov 15 '14 at 23:38
closed as off topic by Ian, Iswanto San, A. Rodas, ebohlman, Steven Penny Apr 13 '13 at 0:36
4
Well, 'longest bar'+word=80 may not fit within 80 cols if second-most-common-word is a much longer word. Am looking for the 'max constraint' I guess. – Brian Jul 2 '10 at 21:04
1
Do we normalize casing? 'She' = 'she'? – Brian Jul 2 '10 at 21:04
2
IMO making this perform, both in terms of execution time and memory usage, seems like a more interesting challenge than character count. – Frank Farmer Jul 2 '10 at 22:17
81
I'm glad to see that my favorite words s and t are represented. – indiv Jul 2 '10 at 22:23
8
@indiv, @Nas Banov -- silly too-simple tokenizer reads "didn't" as {didn, t} and "she's" as {she, s} :) – hobbs Jul 3 '10 at 6:40
59 Answers
LabVIEW 51 nodes, 5 structures, 10 diagrams
Teaching the elephant to tap-dance is never pretty. I'll, ah, skip the character count.
labVIEW code
results
The program flows from left to right:
labVIEW code explained
10
It IS not worth it – user216441 Jul 4 '10 at 6:18
4
LabVIEW's very happy in its hardware control and measurement niche, but really pretty awful for string manipulation. – Joe Zoller Jul 4 '10 at 6:23
2
No 3D yet? ... :D – belisarius Jul 5 '10 at 4:50
19
Best code golf answer I've seen. +1 for thinking outside the box! – Blair Holloway Jul 6 '10 at 1:48
1
Gotta count the elements for us...every box and widget you had to drag to the screen counts. – dmckee Jul 6 '10 at 5:52
Ruby 1.9, 185 chars
(heavily based on the other Ruby solutions)
w=($<.read.downcase.scan(/[a-z]+/)-%w{the and of to a i it in or is}).group_by{|x|x}.map{|x,y|[-y.size,x]}.sort[0,22]
k,l=w[0]
puts [?\s+?_*m=76-l.size,w.map{|f,x|?|+?_*(f*m/k)+"| "+x}]
Instead of using any command line switches like the other solutions, you can simply pass the filename as argument. (i.e. ruby1.9 wordfrequency.rb Alice.txt)
Since I'm using character-literals here, this solution only works in Ruby 1.9.
Edit: Replaced semicolons by line breaks for "readability". :P
Edit 2: Shtééf pointed out I forgot the trailing space - fixed that.
Edit 3: Removed the trailing space again ;)
2
That looks really maintainable. – Zombies Jul 14 '10 at 18:43
GolfScript, 177 175 173 167 164 163 144 131 130 chars
Slow - 3 minutes for the sample text (130)
{32|.123%97<n@if}%]''*n%"oftoitinorisa"2/-"theandi"3/-$(1@{.3$>1{;)}if}/]2/{~~\;}$22<.0=~:2;,76\-:1'_':0*' '\@{"
|"\~1*2/0*'| '@}/
Explanation:
{ #loop through all characters
32|. #convert to uppercase and duplicate
123%97< #determine if is a letter
n@if #return either the letter or a newline
}% #return an array (of ints)
]''* #convert array to a string with magic
n% #split on newline, removing blanks (stack is an array of words now)
"oftoitinorisa" #push this string
2/ #split into groups of two, i.e. ["of" "to" "it" "in" "or" "is" "a"]
- #remove any occurrences from the text
"theandi"3/-#remove "the", "and", and "i"
$ #sort the array of words
(1@ #takes the first word in the array, pushes a 1, reorders stack
#the 1 is the current number of occurrences of the first word
{ #loop through the array
.3$>1{;)}if#increment the count or push the next word and a 1
}/
]2/ #gather stack into an array and split into groups of 2
{~~\;}$ #sort by the latter element - the count of occurrences of each word
22< #take the first 22 elements
.0=~:2; #store the highest count
,76\-:1 #store the length of the first line
'_':0*' '\@ #make the first line
{ #loop through each word
"
|"\~ #start drawing the bar
1*2/0 #divide by zero
*'| '@ #finish drawing the bar
}/
"Correct" (hopefully). (143)
{32|.123%97<n@if}%]''*n%"oftoitinorisa"2/-"theandi"3/-$(1@{.3$>1{;)}if}/]2/{~~\;}$22<..0=1=:^;{~76@,-^* | 1.047631 | EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample |
More Moon and Venus
There's a thin crescent Moon in the western sky this evening. It stands a little above Venus, the radiant "evening star." Sunlight illuminates only about a tenth of the lunar hemisphere that faces our way — a bare sliver against the fading color of twilight.
As the nights pass by, though, that sliver will grow larger as the Sun casts its light on an increasing fraction of that hemisphere of the Moon. It'll light up exactly half of the Moon just about the time the clock strikes midnight on New Year's Eve — a point in the Moon's cycle known as first quarter.
The Moon will continue to grow fatter over the following week, as it passes through its "gibbous" phase — when the Sun illuminates more than half of the visible lunar surface, but not quite all. That phase ends when the Moon is "full" on the night of January 8th. After that, the phases run in reverse, with darkness engulfing a larger slice of the lunar disk until the Moon is "new" on the night of the 22nd.
That whole cycle is the result of the Moon's orbit around Earth. As it circles our planet, the viewing angle between Earth, Sun, and Moon is constantly changing. At full Moon, the Moon lines up opposite the Sun in our sky. And at new Moon, it lines up between Earth and the Sun. And when the Moon is an evening crescent, it's just starting to pull away from the Sun.
So over the next few nights, watch the Moon as it gets fatter and sets later — a beautiful process that repeats month after month.
Script by Damond Benningfield, Copyright 2011
For more skywatching tips, astronomy news, and much more, read StarDate magazine.
First United Methodist Church
Owner: First United Methodist Church
Architect: HBL Architects, Inc.
Square Footage: 16,300 sq. ft.
This project consisted of a new 500-seat worship area, a welcome center, and classrooms, all connected seamlessly with the existing church building. The worship area was a large open space with sloping and hipped roofs and a significant clerestory to allow for large quantities of natural light, all accentuated by a 60-ft-tall tower feeding additional natural light into the worship area. The tower roof is topped by 20 ft of unique framing supporting a crucifix. A new porte cochere was also included in this design.
The large, open clerestory provided a significant challenge on this project. The high roofs and vertical sides of the clerestory had to span 86-ft to create the open worship area. Pinnacle designed 16-ft deep trusses to span this distance and minimize the weight and cost of the structural members. Spanning 45-ft between these trusses were additional scissors trusses, creating the sloped ceiling and allowing for the abundance of natural light. Additional complication stemmed from the varying roof slopes, hips, and valleys, and the four unique roof framing plans that were required. Also, the number of windows throughout the building envelope effectively eliminated the possibility of using braced frames, which resulted in the full lateral capacity being developed in a series of multistory moment frames.
The roof structure consisted of open web steel joists framing between wide flange steel beams, covered with 1 1/2" metal roof deck. The large trusses above the worship area were designed using top and bottom chord wide flange members and tube steel web members. The curved front entry area was constructed using cantilevered wide flange beams of varying lengths. The entire lateral system consisted of wide flange moment frames along six exterior faces of the building. The foundations were designed using 9-ft deep bell bottom piers and grade beams.
Figure TS-4: Ranges of percentage changes in crop yields (expressed as the vertical extent of the vertical bars only) spanning selected climate change scenarios, with and without agronomic adaptation, from paired studies listed in Table 5-4. Each pair of ranges is differentiated by geographic location and crop. Pairs of vertical bars represent the range of percentage changes with and without adaptation. Endpoints of each range represent collective high and low percentage change values derived from all climate scenarios used in the study. The horizontal extent of the bars is not meaningful. On the x-axis, the last name of the lead author is listed as it appears in Table 5-4; full source information is provided in the Chapter 5 reference list.
The response of crop yields to climate change varies widely, depending on the species, cultivar, soil conditions, treatment of CO2 direct effects, and other locational factors. It is established with medium confidence that a few degrees of projected warming will lead to general increases in temperate crop yields, with some regional variation (Table 5-4). At larger amounts of projected warming, most temperate crop yield responses become generally negative. Autonomous agronomic adaptation ameliorates temperate crop yield loss and improves gain in most cases (Figure TS-4). In the tropics, where some crops are near their maximum temperature tolerance and where dryland agriculture predominates, yields would decrease generally with even minimal changes in temperature; where there is a large decrease in rainfall, crop yields would be even more adversely affected (medium confidence). With autonomous agronomic adaptation, it is established with medium confidence that crop yields in the tropics tend to be less adversely affected by climate change than without adaptation, but they still tend to remain below baseline levels. Extreme events also will affect crop yields. Higher minimum temperatures will be beneficial to some crops, especially in temperate regions, and detrimental to other crops, especially in low latitudes (high confidence). Higher maximum temperatures will be generally detrimental to numerous crops (high confidence). [5.3.3]
Important advances in research since the SAR on the direct effects of CO2 on crops suggest that beneficial effects may be greater under certain stressful conditions, including warmer temperatures and drought. Although these effects are well established for a few crops under experimental conditions, knowledge of them is incomplete for suboptimal conditions of actual farms. Research on agricultural adaptation to climate change also has made important advances. Inexpensive, farm-level (autonomous) agronomic adaptations, such as altering of planting dates and cultivar selections, have been simulated in crop models extensively. More expensive, directed adaptations, such as changing land-use allocations and developing and using irrigation infrastructure, have been examined in a small but growing number of linked crop-economic models, integrated assessment models, and econometric models.
Degradation of soil and water resources is one of the major future challenges for global agriculture. It is established with high confidence that those processes are likely to be intensified by adverse changes in temperature and precipitation. Land use and management have been shown to have a greater impact on soil conditions than the indirect effect of climate change; thus, adaptation has the potential to significantly mitigate these impacts. A critical research need is to assess whether resource degradation will significantly increase the risks faced by vulnerable agricultural and rural populations [5.3.2, 5.3.4, 5.3.6].
In the absence of climate change, most global and regional studies project declining real prices for agricultural commodities. Confidence in these projections declines farther into the future. The impacts of climate change on agriculture are estimated to result in small percentage changes in global income, with positive changes in more developed regions and smaller or negative changes in developing regions (low to medium confidence). The effectiveness of adaptation (agronomic and economic) in ameliorating the impacts of climate change will vary regionally and depend a great deal on regional resource endowments, including stable and effective institutions. [5.3.1, 5.3.5]
Most studies indicate that mean annual temperature increases of 2.5ºC or greater would prompt food prices to increase (low confidence) as a result of slowing in the expansion of global food capacity relative to growth in global food demand. At lesser amounts of warming than 2.5ºC, global impact assessment models cannot distinguish the climate change signal from other sources of change. Some recent aggregated studies have estimated economic impacts on vulnerable populations such as smallholder producers and poor urban consumers. These studies indicate that climate change will lower the incomes of vulnerable populations and increase the absolute number of people at risk of hunger (low confidence). [5.3.5, 5.3.6]
Without autonomous adaptation, increases in extreme events are likely to increase heat stress-related livestock deaths, although winter warming may reduce neonatal deaths at temperate latitudes (established but incomplete). Strategies to adapt livestock to physiological stresses of warming are considered effective; however, adaptation research is hindered by the lack of experimentation and simulation. [5.3.3]
Confidence in specific numerical estimates of climate change impacts on production, income, and prices obtained from large, aggregated, integrated assessment models is considered to be low because there are several remaining uncertainties. The models are highly sensitive to some parameters that have been subjected to sensitivity analysis, yet sensitivity to a large number of other parameters has not been reported. Other uncertainties include the magnitude and persistence of effects of rising atmospheric CO2 on crop yield under realistic farming conditions; potential changes in crop and animal pest losses; spatial variability in crop responses to climate change; and the effects of changes in climate variability and extreme events on crops and livestock. [Box 5-3]
Smoking is the single biggest risk factor for heart attacks in men and women
The most effective way to avoid a heart attack is to stop smoking altogether. Even one cigarette every day increases the risk of a heart attack by 50%. In addition to kicking the habit, smokers can reduce their overall cardiovascular risk by being physically active, eating a balanced diet, maintaining a healthy body weight, and controlling blood cholesterol and blood pressure.
More than 1 billion people in the world smoke, of which 80% live in low and middle-income countries. Each year, smoking kills five million people and passive smoking kills 600,000 people. Healthcare spending on smoking-related diseases is similarly significant and increasing; the burden of smoking cost the European economy over €500 billion in 2009.
Despite the massive reduction in cardiovascular disease in the last 30 years, it remains the main cause of death worldwide. Most cardiovascular disease is avoidable. A healthy lifestyle could prevent more than 80% of cardiovascular disease.
Smoking is lethal and even one cigarette a day is one too many. Smoking one cigarette every day increases the risk of a heart attack by 50%. Heavier smokers have an even higher risk. Any amount of smoking is harmful, and quitting is the only healthy option.
Cravings for a cigarette usually last 3 to 5 minutes. If you can get over those few minutes, you are well on the way to not having that cigarette. The 4 D's can help you do that.
- Delay: wait at least 3 minutes; the urge will pass
- Drink: water or juice
- Distract yourself: move away from the situation, and do something different
- Take deep breaths: breathe slowly and deeply, in through your nose and out through your mouth.
The most effective way for smokers to reduce their chance of a heart attack is to kick the habit. Risk can be reduced further by being physically active, eating a balanced diet, maintaining a healthy body weight, and controlling blood cholesterol and blood pressure.
It's hard to find a positive in a recession like the one in which the entire globe has found itself. Governments, strapped for cash, have been forced to declare bankruptcy; research has found that young people who enter the workforce during a recession take 10 years to make up for the wages they lost; and, perhaps most importantly, people are out of work.
Now research has found yet another negative of a slouching economy: recessions can shorten your life. The researchers found that the effects are particularly pronounced for people in their 50s and 60s when a recession hits.
Courtney C. Coile, Phillip B. Levine, and Robin McKnight from Wellesley College examined mortality data between 1969 and 2008. Specifically, they looked at those Americans who were nearing retirement age as a recession hit. Though people younger than 57 and over 61 did not seem to be hit in a statistically significant way, researchers found that people between the ages of 57 and 61 were less likely to live into old age.
Their findings held particularly true for people in that age group who had found themselves unemployed. A 58-year-old who lost his or her job could expect as much as three years shaved off his or her lifespan.
Researchers are not precisely sure why this happens, but they offer a few explanations. Older workers are more likely to suffer long bouts of unemployment during a recession, and are therefore more likely to skip medical treatments because of the cost. This, in turn, could have health ramifications later in life.
Adults over the age of 62 do not have to make the same health sacrifices, because they can rely on social security and, later on, Medicare to make up any income and health care they may have lost by losing a job.
This news could be particularly dire for the Baby Boomers, who make up a disproportionate number of the long-term unemployed during this recession.
The empirical role of social capital on urban transformation: A case study of Istanbul
Free (open access)
Volume 10 (2015), Issue 3
281 - 300
Ö. ÖZÇEVIK & P. TAN
The purpose of this study is to test the validity of the social capital knowledge that belongs to the local business community as an instrument for the formation of the initial strategies of urban transformation and to test the effect of independent variables in the formation of such social capital. This study takes as its point of departure the recently increasing interest in the role of social capital in planning and development, and the need for access to embedded knowledge in the sites of urban transformation. The lack of field data makes managing implementations of urban transformation difficult, and these implementations are not supported by appropriate policies. It is important to study this issue in İstanbul, which is going through the process of urban transformation and harbors a variety of resources for social capital due to its unique conditions. The hypothesis of this study, which aims to contribute to research being conducted in the field, is that the levels of security, belonging, awareness, and expectations inherent in social capital can change according to the profile of the small business community and according to the characteristics of the physical capital in sites of urban transformation. In this study, conducted in 2012, data from the neighborhood of Çeliktepe were collected using 'mixed methods social research.'
Social capital, local networks, small businesses, urban transformation, mixed methods social research.
Deafness means complete hearing loss. Partial loss of hearing is often called hearing loss rather than deafness.
Deafness can occur in one or both ears.
There are three primary types of hearing loss:
- Conductive—hearing loss caused by the inability of the sound to reach the inner ear
- Sensorineural—hearing loss caused by disorders of the inner ear or auditory nerve. This type of loss is usually permanent.
- Mixed—hearing losses that are a combination of both conductive and sensorineural loss
The conditions that can cause or be associated with hearing loss include the following:
- Ear infections
- Middle ear fluid
- Hole in the ear drum
- Trauma, including birth trauma
Nose or throat problems, such as:
- Nasal allergies
- Sinus problems
- Blockage of the tubes leading from the ears to the throat
- Family history
Ear disorders, such as:
Infections, such as:
Bacterial infections, such as:
Tumors involving the:
Neurological disorders, such as:
Ototoxic drugs that damage the ear, such as:
- Aspirin—usually reverses when aspirin is stopped
- Quinine—usually reverses when quinine is stopped
- Certain antibiotics—usually reversible when stopped
Deafness may occur at any age. Risk factors that increase your chances of deafness include:
- Premature birth
- Increased age
- Taking ototoxic medications
Exposure to loud noise on the job, such as:
- Loud industrial noise
- Use of heavy equipment
- Being a musician
Exposure to recreational loud noise, such as:
- Guns used during target practice
- Loud music
Hearing loss usually comes on gradually, but may come on suddenly. Symptoms may include:
- Difficulty hearing
Ringing in the ears, also called tinnitus
- A sensation of spinning
- Ear pain
- Feeling of ear fullness, such as that caused by earwax or fluid
Symptoms of deafness in infants may be noted at these stages:
- 1 to 4 months: lack of response to sounds or voices
4 to 8 months:
- Disinterest in musical toys
- Lack of verbalization, such as babbling, cooing, making sounds
- 8 to 12 months: lack of recognition of child's own name
- 12 to 16 months: lack of speech
All children, including newborns, should be screened for hearing loss.
Your doctor will ask about your symptoms and medical history. A physical exam will be done. As part of the diagnosis, your doctor may try to determine the following:
- Location of the problem
- Degree of loss
- Cause—not always possible to identify the exact cause of hearing loss; this information can help guide treatment
Your ears may be tested. This can be done with:
- A brainstem auditory evoked response test
- Bone vibrator—also called a tuning fork test
- Audiometry—also called a hearing test
Images may be taken of your bodily structures. This can be done with:
Treatment for deafness depends on the type of hearing loss. Options may include:
- Medical treatment, such as removal of earwax or use of antibiotics to treat an ear infection
- In selected cases of sudden hearing loss, medical treatment with steroids may be effective.
- Hearing aids to help amplify sounds
Surgery, such as:
- Stapedectomy—for treatment of otosclerosis
- Tympanoplasty—for a perforated eardrum
- Tympanostomy tubes—for persistent middle ear infections or fluid
- Cochlear implant—a surgically implanted electronic device that helps provide sound to a person with severe sensorineural hearing loss. Although the devices do not completely restore hearing, improvements in implant technology continue to be made.
- Learning sign language or lip reading to improve communication skills
To help prevent deafness, avoid loud noise. In cases when loud noise cannot be avoided, you can reduce exposure to loud noises by wearing ear protection. Also, taking steps to reduce injuries or disease may prevent certain types of deafness.
There is currently no effective way to prevent congenital or genetic deafness.
Hearing screening for newborns can help ensure that hearing loss in young babies is detected and treated at the earliest possible stage.
Deafness and hearing loss. World Health Organization website. Available at:
_URL_ Accessed September 20, 2013.
Deafness and hearing loss research. The Scripps Research Institute website. Available at:
_URL_ Accessed September 20, 2013.
Hearing, ear infections, and deafness. National Institute on Deafness and Other Communication Disorders website. Available at:
_URL_ Updated September 8, 2013. Accessed September 20, 2013.
Hearing loss. American Speech-Language-Hearing Association website. Available at:
_URL_ Accessed September 20, 2013.
Plaza G, Herráiz C. Intratympanic steroids for treatment of sudden hearing loss after failure of intravenous therapy.
Otolaryngol Head Neck Surg. 2007 Jul;137(1):74-8.
What is hearing loss? NIH SeniorHealth website. Available at:
_URL_ Accessed September 20, 2013.
Last reviewed September 2013 by Michael Woods, MD
Please be aware that this information is provided to supplement the care provided by your physician. It is neither intended nor implied to be a substitute for professional medical advice. CALL YOUR HEALTHCARE PROVIDER IMMEDIATELY IF YOU THINK YOU MAY HAVE A MEDICAL EMERGENCY. Always seek the advice of your physician or other qualified health provider prior to starting any new treatment or with any questions you may have regarding a medical condition.
Copyright © EBSCO Publishing. All rights reserved.
Risk and (Human-induced) Climate Change
Professor Watson's career has evolved from research scientist at the Jet Propulsion Laboratory: California Institute of Technology, to a US Federal Government program manager/director at the National Aeronautics
and Space Administration (NASA), to a scientific/policy advisor in the US Office of Science and Technology Policy (OSTP), White House, to a scientific advisor, manager and chief scientist at the World Bank, to a
Chair of Environmental Sciences at the University of East Anglia, the Director for Strategic Direction for the Tyndall centre, and Chief Scientific Advisor to the UK Department of Environment, Food and Rural Affairs. In parallel to his formal positions he has chaired, co-chaired or directed international scientific, technical and economic assessments of stratospheric ozone depletion, biodiversity/ecosystems (the GBA and MA), climate change (IPCC) and agricultural S&T (IAASTD). Professor Watson's areas of expertise include managing and coordinating national and international environmental programs, research programs and
assessments; establishing science and environmental policies - specifically advising governments and civil society on the policy implications of scientific information and policy options for action;
and communicating scientific, technical and economic information to policymakers. During the last twenty years he has received numerous national and international awards recognizing his contributions to
science and the science-policy interface, including in 2003 - Honorary "Companion of the Order of Saint Michael and Saint George" from the United Kingdom.
The question is not whether the Earth's climate will change in response to human activities, but when, where and by how much. Human activities are changing the Earth's climate and further human-induced climate change is inevitable. Indeed the climate of the next few decades will be governed by past emissions. The most adverse consequences of human-induced climate change will be in developing countries and poor people within them. Climate change threatens to bring more suffering to the one billion people who already go to bed hungry every night and the approximately 2 billion people exposed to insect-borne diseases and water scarcity. Sea level rise threatens to displace tens of millions of people in deltaic areas and low-lying small island states. Climate change will undermine the ability of many poor people to escape poverty and the long-term sustainable economic development of some countries. Hence, climate change is not only an environmental issue, but a development and security issue.
The challenge is to limit the magnitude and rate of human-induced climate change, and simultaneously reduce the vulnerability of socio-economic sectors, ecological systems and human health to current and projected climate variability by integrating climate concerns into local and national economic planning.
Technological options for reducing greenhouse gas emissions cost-effectively over the next few decades already exist. However, the required transition to a very low carbon economy (a reduction in global emissions by at least 50% by 2050) will require a technological evolution in the production and use of energy, energy sector reform, appropriate pricing policies and behavior change, coupled with a more sustainable agricultural sector and reduced deforestation. This transition to a low-carbon economy must be achieved while improving access to affordable energy in developing countries, which is critical for economic growth and poverty alleviation, and while ensuring adequate affordable and nutritious food.
The challenge is to negotiate a long-term (up to 2050) global regulatory framework that is equitable with common but differentiated responsibilities and has intermediate targets that can reduce greenhouse emissions to a level that limits the increase in global mean surface temperature to 2°C above pre-industrial levels. While this goal has been widely accepted, the current rate of growth in emissions globally, coupled with a failure in Copenhagen to agree to stringent targets to reduce emissions, makes this goal extremely difficult, hence the world needs to be prepared to adapt to a 4°C warmer world.
Pulmonary embolism is a blockage of one or more arteries in or leading to the lungs, caused by an embolus (a clot). In almost every case, the clot originates in a deep vein in the pelvis, arms, or legs, breaks loose, and travels to the lungs. Depending on its size, the blood clot obstructs either a small or a large pulmonary artery and blocks the flow of blood through that vessel.
Risk Factors for Deep Vein Thrombosis Pulmonary Embolism
There are many risk factors for pulmonary embolism, and people with more than one risk factor at the same time are at even greater risk. Immobility (for example, following surgery or an injury) and blood clotting disorders (known as hypercoagulable states, or thrombophilia) are the main risk factors. The most common type of genetic thrombophilia is factor V Leiden, which also increases the risk of pregnancy complications.
Other factors which increase the risks for DVT include:
- Cancer and its treatment
- Pregnancy and postpartum period
- Hormone therapy (for example birth control pills)
- Varicose veins
- Sitting for long period of time (e.g., on a plane, in the car)
DVT Pulmonary Embolism Symptoms and Signs
Symptoms of DVT differ depending on the severity and location of the blood clot. In almost 50% of patients with this condition, DVT is asymptomatic (i.e., causes no symptoms). In some cases, patients are not aware that they have DVT until the blood clot travels to the lung and causes a pulmonary embolism.
Symptoms of DVT include the following:
- Pain or tenderness
- Swelling (edema)
- Redness or discoloration
Some patients with DVT experience pain in the calf when the foot is flexed upward (known as Homan's sign). However, this sign can also be associated with other conditions and is not present in all patients with DVT.
Signs of pulmonary embolism include shortness of breath, cough, chest pain, and a low fever (approximately 101°F). In some cases, patients with pulmonary embolism cough up blood (known as hemoptysis). The condition can also cause feelings of apprehension and restlessness, and an irregular heart rate (arrhythmia).
What is Pulmonary Edema?
In general, edema means swelling. It typically occurs when fluid inside the blood vessels seeps out into the surrounding tissue, causing swelling. This can happen either because of too much pressure in the blood vessels or because there are not enough proteins in the bloodstream to hold the fluid in the plasma (the component of blood that does not contain blood cells).
Pulmonary edema occurs when the alveoli fill with excess fluid that has seeped out of the blood vessels in the lung, rather than with air. This causes problems with the exchange of gases (carbon dioxide and oxygen), resulting in breathing trouble and poor blood oxygenation. It is sometimes described to patients as "water in the lungs."
Pulmonary edema may be caused by many different factors. When it is related to heart failure it is known as cardiogenic pulmonary edema; when it is related to other causes it is called non-cardiogenic pulmonary edema. To learn more about blood clot symptoms and treatment, visit Pulmonary Emboli.
men who would attempt to give them freedom, would be their greatest enemies.
Charles Pinckney's Speech to Congress, 1820.
The South also extended its legal argument against the Tallmadge Amendment to the sovereignty and equality of the states (Woodburn, 1894). Quoting the second half of Article 4, Section 3, Clause 2 of the U.S. Constitution: "…nothing in this Constitution shall be so construed as to Prejudice any Claims of the United States, or of any particular State", along with the Tenth Amendment, they argued that the Constitution, by intentionally omitting the slavery issue, had relinquished its claim to restriction. Slavery was an issue to be decided by the individual states. The Tallmadge Amendment was unconstitutional because it placed its constraint on the admission of Missouri alone. Besides, since slaves were treated as property, banning slavery would amount to illegal seizure of personal property, prohibited by the Fifth Amendment.
Morals and Principles
I was aware of the delicacy of the subject and that I had learned from Southern gentlemen the difficulties and the dangers of having free blacks intermingling with slaves;… While we deprecate and mourn over the evil of slavery, humanity and good morals require us to wish its abolition, under circumstances consistent with the safety of the white population. Willingly, therefore, will I submit to an evil which we cannot safely remedy… But, sir, all these reasons cease when we cross the banks of the Mississippi, a newly acquired territory, never contemplated in the formation of our Government, not included within the compromise or mutual pledge in the adoption of our Constitution, a new territory acquired by our common fund, and ought justly to be subject to our common legislation.
Tallmadge's Speech to Congress, 1819
The North did not see any attempt to end slavery on the South's part. Instead, they saw a revival and an intention to extend and expand it. For the Northern restrictionists, the future of slavery was at stake, not only in Missouri but in all new states and territories. They sought to apply the Northwest Ordinance's prohibition to the regions west of the Mississippi River and Florida, and to put an end to the expansion of slavery for good.
They argued on humanitarian grounds and maintained that the Declaration of Independence and the Constitution provided the basis for abolition. While some claimed that Article I, Section 8 would suffice to restrict all slave trades, the most eloquent proponent of the Tallmadge Amendment, New York Senator Rufus King, cited the first part of Article 4, Section 3, Clause 2: "The Congress shall have Power to dispose of and make all needful Rules and Regulations respecting the Territory or other Property belonging to the United States" and contended that the Congress was granted the power to dictate the conditions of admission of each state. Slavery was an evil disgrace forced upon the colonies. This wickedness was tolerated only for the sake of the Union and should be restricted at the earliest expediency.
Almost all debates surrounding the Missouri Crisis could be boiled down to the balance of power, however. Despite its dominance of the House, the North's demographic lead did not translate into political sway. The equal number of Senators from each state gave the less populated South an unfair advantage; maintaining the balance of power depended on an equal number of free and slave states.
The North had long been pained by the three-fifths clause, which added 60% of the slave population to the slave states' free population when calculating taxation and assigning House seats. Originally meant as a compromise to discourage the growth of slavery, the three-fifths clause gave southern states more congressional representatives, and more electoral votes for president, than their white population alone would have entitled them to.
At the time of Missouri's request for statehood, the nation contained 11 free states and 11 slave states. Missouri's entry as a slave state would have tipped the balance of power in the South's favor; with the Tallmadge Amendment approved, the antislavery stance would gain strength going forward. Aggravated by the dominance of the "Slave Power," the North feared that Missouri's admission as a slave state would solidify the South's dominance. Worse still, it could lead to more slave states and perpetuate the South's reign in national politics.
Missouri renewed its request after the Congress reconvened; Maine also applied to join the Union. The Senate amended the Maine admission bill with the unconditional acceptance of a slaveholding Missouri, but the coercion was called out by Northern House representatives. Seeking to incentivize the bill's passage, Senator Jesse Thomas of Illinois added a proviso that allowed slavery in Missouri but "forever prohibited" slavery in all remaining areas of the Louisiana Purchase north of the 36° 30′ parallel, an area mostly uninhabited at the time. The bill passed the Senate and was again rejected by the House.
In response to the Senate's strong-arming, the House passed its own bill admitting Missouri with the antislavery Tallmadge Amendment. That bill was rejected by the Senate, and the Congress came to a deadlock.
A sullen gloom hung over the nation. All felt that the rejection of Missouri, was equivalent to a dissolution of the Union: because those states which already had, what Missouri was rejected for refusing to relinquish, would go with Missouri.
Abraham Lincoln, Eulogy of Henry Clay
The Senate called for a committee of conference the next day. Kentucky Representative Henry Clay, then Speaker of the House and known as "The Great Compromiser," spearheaded the compromise effort. A slave owner himself, Clay had argued for "the inviolability of this species of property" granted by the Constitution and advocated "diffusion" and "colonization" as the ultimate and humane solution to slavery. He, along with Thomas Jefferson, James Madison, and then-President James Monroe, claimed that it was only humane to disperse the South's surplus slave population westward, and to expatriate them once free labor became plentiful and slave-holding unaffordable in time. This would defuse the tension among the dense and restless southern slave population and lessen the threat and stress felt by the slaveholders, while profiting through the domestic slave trade.
Clay's optimism about the compromise was soon threatened by the building momentum of the antislavery force. Fearing an impending all-out restriction on slavery, Clay rushed to forge a compromising majority by instilling the fear of disunion and accusing the Federalists of instigating and exploiting the Missouri issue to divide the nation.
We have been told by the Speaker that the people of Missouri are ready to shoulder their muskets, to march en masse, and force their way into this hall, Sir, if this be indeed so, it is time to barricade the doors. If it be an enemy that is advancing, let us bar our gates, and prepare for our defence;…
… But not only will Missouri revolt from our authority: the slave-holding states will join with her, and, if this restriction passes, the Union will be dissolved. Such, sir, is the language which I have heard, with infinite regret, upon this floor, not from two or three members merely, but from all those who have spoken against this amendment…
…respecting the motives of the friends of this restriction; and an appeal has been made to vulgar prejudices, by calling it a Federal measure; …it is well known that it originated with Republicans; that it is supported by the Republicans throughout the free states; and that the Federalists of the south are its warm opponents: The question then is not between Federalists and Republicans, but between slave-holders and those who hold no slaves. It is a knowledge of this fact, which has induced the free states, usually so much divided among themselves, to advance on this occasion with so much ardor and unanimity to the attainment of their object.
Speech of Mr. Plumer, of New-Hampshire, on the Missouri question, delivered in the House of Representatives of the United States, February 21, 1820
Clay successfully convinced some Southern pro-slavery House representatives to accept the Thomas proviso and wrangled several Northern representatives into absenting themselves or supporting Missouri as a slave state. By dividing the Compromise into three bills, Henry Clay prevented the Northern and Southern opponents from joining forces to defeat the Senate bill.
On the same day, March 2, 1820, the joint committee, carefully chosen by Clay, returned with an endorsement of the original Senate compromise bill, now in three separate parts. Missouri was admitted as a slave state by a margin of three votes, Maine entered as a free state the day before its application expired, and slavery was prohibited north of the 36° 30′ parallel, the so-called "Compromise Line." The Missouri Compromise was thus achieved. Clay sneaked the bill to the Senate while blocking the House's reconsideration. President Monroe signed the bill on March 6, 1820.
He did not confine himself to speeches addressed to the House, but he went from man to man, expostulating, beseeching, persuading, in his most winning way… What helped him in gaining over the number of votes necessary to form a majority was the growing fear that this quarrel would break up the ruling party, and lead to the forming of new divisions.
Carl Schurz, Life of Henry Clay
President Monroe's Role
President James Monroe was instrumental in fostering the compromise. "Monroe's endorsement of the Missouri Compromise was a last-ditch effort to defeat a budding antislavery movement that stood a few congressional votes shy of enacting the most meaningful national restrictions on slavery in a generation." (Hammand, 2019)
A Virginia slaveholder himself, he regarded the nation's interests as aligned with the prosperity of the South, and concerned himself with maintaining the privilege and security of Virginia's planter class. Monroe deemed Virginia
Beijing Science and Technology Daily, July 9th (Reporter Liu Yuanyuan; Correspondent Feng Yi). Can eating more fruits and vegetables really prevent diabetes? The latest scientific research gives the answer.
The reporter learned from Westlake University that Professor Zheng Jusheng of the university's School of Life Sciences, together with more than 40 nutritionists in Europe, has shown from the perspective of blood nutritional markers that eating more fruits and vegetables helps prevent diabetes (in this article, type 2 diabetes).
The study concluded that eating an additional 66 grams of fruits and vegetables every day is associated with a 25% lower risk of diabetes. This provides valuable suggestions and references for dietary guidance in the field of public health. The research results were published online in the British Medical Journal (BMJ) on July 9th, Beijing time.
"We have tracked and recorded more than 10,000 cases of diabetes in eight European countries, including Britain, France, Germany, Italy, Spain, and Denmark, and compared them with more than 13,000 healthy people. From the perspective of nutritional markers, we found that eating more fruits and vegetables really does play a positive role in preventing diabetes," said Zheng Jusheng, the first author of the paper.
In this study, the research team recorded seven kinds of nutrients in the blood of the experimental population, including vitamin C and six kinds of carotene. These seven blood indexes have been proved to be nutritional markers corresponding to the effective intake of vegetables and fruits. Generally speaking, the more fruits and vegetables are consumed, the higher the content of these seven indicators in the human body.
Through regular measurement and tracking, the research team found that the higher the nutritional markers in the body, the lower the risk of diabetes, which shows that eating more fruits and vegetables can effectively reduce that risk. Statistically, for each increase of one standardized unit in the composite of the seven nutritional markers, equivalent to consuming an additional 66 grams of fruits and vegetables every day, the risk of diabetes fell by 25%.
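The article's headline numbers (a 25% lower risk per one standardized unit of the marker composite, equivalent to roughly 66 g/day of extra fruit and vegetables) can be turned into a back-of-the-envelope calculation. The sketch below assumes a log-linear dose-response, so the hazard ratio scales multiplicatively with intake; that scaling assumption and the `hazard_ratio` helper are illustrative, not part of the study.

```python
# Hedged sketch: the article reports a 25% lower diabetes risk per one
# standardized unit of the nutritional-marker composite (~66 g/day of
# extra fruit and vegetables). We ASSUME a log-linear dose-response
# (not stated in the article), so the hazard ratio compounds per unit.

BASE_HR_PER_UNIT = 0.75  # 25% risk reduction per 66 g/day (from the article)
GRAMS_PER_UNIT = 66.0    # grams of extra daily intake per standardized unit


def hazard_ratio(extra_grams_per_day: float) -> float:
    """Hazard ratio for a given extra daily intake under the
    log-linear assumption: HR = 0.75 ** (grams / 66)."""
    units = extra_grams_per_day / GRAMS_PER_UNIT
    return BASE_HR_PER_UNIT ** units


if __name__ == "__main__":
    for grams in (0, 66, 132, 198):
        print(f"{grams:>3} g/day extra -> hazard ratio {hazard_ratio(grams):.3f}")
```

Under this assumption, doubling the extra intake to 132 g/day would give a hazard ratio of 0.75² ≈ 0.56; real dose-response curves in such studies typically flatten at higher intakes, so this is only an illustration of how the per-unit figure compounds.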
According to reports, more than 10,000 diabetes cases and a healthy control group of 13,000 people were screened from more than 400,000 participants over nearly 10 years of follow-up, so long-term data support the reliability and stability of the conclusion.
"Many teams have done similar research before. Some teams used questionnaires, which may reflect the subjective will of the participants, so the experimental results may contain errors. Other research teams may have sampled only a few hundred people over relatively short tracking periods, so their statistics are not representative. This study has a large sample of participants and a long experimental period, and verifying the results through a more scientific method is of great significance in the field of public health," Zheng Jusheng said.
Mediatized Populisms| Digital Populism: Trolls and Political Polarization of Twitter in Turkey
This article analyzes political trolling in Turkey through the lens of mediated populism. Twitter trolling in Turkey has diverged from its original uses (i.e., poking fun, flaming, etc.) toward government-led polarization and right-wing populism. Failing to develop an effective strategy to mobilize online masses, Turkey's ruling Justice and Development Party (JDP/AKP) relied on the polarizing performances of a large progovernment troll army. Trolls deploy three features of JDP's populism: serving the people, fetish of the will of the people, and demonization. Whereas trolls traditionally target and mock institutions, Turkey's political trolls act on behalf of the establishment. They produce a digital culture of lynching and censorship. Trolls' language also impacts pro-JDP journalists who act like trolls and attack journalists, academics, and artists critical of the government.
Reading novels has been a good experience for young and old for many years. Reading can teach people many skills and can improve literacy and a person's general knowledge. However, over the generations reading has become a lot less common. Now even young children would rather interact with a tablet than be read a novel. The statement "reading novels is of no value anymore" is true to some extent. Reading is of value, but reading novels specifically is not: you can gain the same enjoyment from reading other things as you can from novels; people don't enjoy being forced to read and find it a burden; and people can read things other than novels to gain literacy skills.
Firstly, I believe reading is of value but reading novels is not. A lot of people rely on technology now and would rather read e-books than a novel. Reading online is also accessed a lot more easily ...
Secondly, people do not enjoy being forced to read. Some people find an escape in reading and get a good experience from it; however, when someone is forced to read, this removes the experience, which is why many young people do not enjoy reading. We are forced to read throughout schooling and encouraged to read at home and to continue reading outside of school. We benefit from reading, and throughout our childhood and schooling novels are the easiest texts to access. As we grow up our parents tell us to read, and they often read to us to help us, but some children do not get the same enjoyment from reading books as they do from reading something on a tablet that can also be interactive. For example, young children are told to eat vegetables because they are good for them, just as reading is encouraged because it is good to do. Reading is a major part of everyday life, but novels do not need to be its one source.
Additionally, not everyone enjoys novels, so they would much rather read other things. Novels give good literacy benefits, but it is possible to get the same benefits from reading a magazine, a newspaper, or an article online. As well as getting the same literacy benefits, you can also learn about things you are interested in rather than something you do not enjoy at all. For example, netball gives good health benefits, but not everyone has to play netball to get those benefits; they can run, play soccer, or even play basketball and still get the same benefits from other sports.
Finally, the statement "reading novels is of no value anymore" is true to some extent. Reading is of value, but reading novels is not, as you get the same experience reading an article or magazine; people do not enjoy being forced to read, so it becomes a burden; and people would rather read many other things to gain the same literacy skills. With the world becoming so technologically reliant and finding easier ways to do things, novels have become a thing of the past. So as we move on through life we should embrace the changes. | 2.192111 | m-a-p/FineFineWeb
According to the latest report by IMARC Group, titled "Internet of Things (IOT) in Healthcare Market: Global Industry Trends, Share, Size, Growth, Opportunity and Forecast 2023-2028," the global internet of things (IOT) in healthcare market size reached US$ 277.8 Billion in 2022. The internet of things (IOT) in healthcare refers to a network of physical devices, appliances, and other objects that are embedded with sensors, software, and network to connect and exchange data with other devices and systems over the internet. It helps in automating tasks, optimizing workflows, and reducing the need for human intervention, which results in improved efficiency and productivity in the healthcare sector. It also reduces operational costs by optimizing resource utilization, reducing energy consumption, and improving equipment maintenance and uptime. Moreover, it minimizes waste, conserves energy, and encourages sustainability by monitoring and optimizing resource consumption. At present, IOT is gaining traction in healthcare for remote patient monitoring, predictive maintenance, and telehealth applications across the globe.
Global Internet of Things (IOT) in Healthcare Market Trends:
The burgeoning healthcare sector and integration of advanced technologies, such as artificial intelligence (AI) and machine learning (ML), with IOT for providing advanced solutions and improving operational efficiency, automation, and optimization, represent one of the key factors positively influencing the market across the globe. In addition, AI-enabled IOT devices assist healthcare providers in diagnosing and predicting ailments in real-time, which is bolstering the market growth. Moreover, the increasing prevalence of medical disorders and the growing adoption of smart devices and wearables in self-health measurement are creating a favorable market outlook. Apart from this, people are becoming aware of self-health management due to increasing disease conditions in the early stage of life, which is further catalyzing the demand for wearable medical devices. IOT-based wearables help in monitoring of remote patients with chronic illnesses, which, in turn, is stimulating the growth of the market. On account of the aforementioned factors, the market is anticipated to reach a value of US$ 687.5 Billion by 2028, exhibiting a CAGR of 16.4% during 2023-2028.
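The report's growth figures can be sanity-checked: US$ 277.8 Billion in 2022 growing to US$ 687.5 Billion by 2028 implies a compound annual growth rate over the six annual steps from 2022 to 2028 that is close to the stated 16.4% CAGR.

```python
# Check the implied compound annual growth rate (CAGR) of the market:
# US$ 277.8 B (2022) -> US$ 687.5 B (2028), i.e. six annual steps.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` steps."""
    return (end / start) ** (1.0 / years) - 1.0

implied = cagr(277.8, 687.5, 6)
print(f"implied CAGR: {implied:.1%}")  # close to the stated 16.4%
```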
- On the basis of the component, the market has been segmented into medical devices (wearable external devices, implanted medical devices, and stationary medical devices), system and software (remote device management, network bandwidth management, data analytics, application security, and network security), and services (system integration services, consulting, training and education, and support and maintenance services). At present, services (system integration services, consulting, training and education, and support and maintenance services) account for the majority of the total market share.
- Based on the connectivity technology, the market has been classified into Wi-Fi, Bluetooth low energy, ZigBee, near field communication (NFC), cellular, and satellite. Cellular holds the largest market share.
- On the basis of the application, the market has been categorized into telemedicine, medication management, clinical operations, inpatient monitoring, connected imaging, and others. Presently, telemedicine dominates the market.
- Based on the end user, the market has been classified into hospitals and clinics, clinical research organizations, research and diagnostic laboratories, and others. Hospitals and clinics exhibit a clear dominance in the market.
- Region-wise, the market has been divided into North America (the United States and Canada), Asia Pacific (China, Japan, India, South Korea, Australia, Indonesia, and others), Europe (Germany, France, the United Kingdom, Italy, Spain, Russia, and others), Latin America (Brazil, Mexico, and others) and the Middle East and Africa. Amongst these, North America (the United States and Canada) enjoys the leading position in the market.
- The competitive landscape of the market has been studied in the report with the detailed profiles of the key players operating in the market. Some of the key players include Apple Inc., Cisco Systems Inc., Comarch SA, HQSoftware, Intel Corporation, Koninklijke Philips N.V., Medtronic plc, Microsoft Corporation, Oracle Corporation, OSP Labs, Oxagile, Qualcomm Incorporated, Siemens AG and STANLEY Healthcare.
| Report Attribute | Details |
|---|---|
| Base Year of the Analysis | 2022 |
| Segments Covered | Component, Connectivity Technology, Application, End User, Region |
| Regions Covered | Asia Pacific, Europe, North America, Latin America, Middle East and Africa |
| Countries Covered | United States, Canada, Germany, France, United Kingdom, Italy, Spain, Russia, China, Japan, India, South Korea, Australia, Indonesia, Brazil, Mexico |
| Companies Covered | Apple Inc., Cisco Systems Inc., Comarch SA, HQSoftware, Intel Corporation, Koninklijke Philips N.V., Medtronic plc, Microsoft Corporation, Oracle Corporation, OSP Labs, Oxagile, Qualcomm Incorporated, Siemens AG and STANLEY Healthcare |
| Customization Scope | 10% Free Customization |
| Report Price and Purchase Option | Single User License: US$ 2499; Five User License: US$ 3499; Corporate License: US$ 4499 |
| Post-Sale Analyst Support | |
| Delivery Format | PDF and Excel through Email (We can also provide the editable version of the report in PPT/Word format on special request) |
IMARC Group is a leading market research company that offers management strategy and market research worldwide. We partner with clients in all sectors and regions to identify their highest-value opportunities, address their most critical challenges, and transform their businesses.
IMARC's information products include major market, scientific, economic and technological developments for business leaders in pharmaceutical, industrial, and high technology organizations. Market forecasts and industry analysis for biotechnology, advanced materials, pharmaceuticals, food and beverage, travel and tourism, nanotechnology and novel processing methods are at the top of the company's expertise.
134 N 4th St.
Brooklyn, NY 11249, USA
Americas: _PHONE_ | Africa and Europe: +44-_PHONE_ | Asia: +91-_PHONE_, +91-_PHONE_ | 1.881248 | m-a-p/FineFineWeb
Location: Crop Improvement and Protection Research
2016 Annual Report
1a. Objectives (from AD-416):
Objective 1: Identify specific genes associated with Beet necrotic yellow vein virus infection of sugarbeet that contribute to development of rhizomania disease and the ability of the virus to overcome resistance for use as potential targets for induced resistance. This will involve comparisons with other soil-borne pathogens using in-house funds. Completion within 5 years. Objective 2: Determine environmental and epidemiological factors contributing to the ability of sugarbeet and vegetable viruses to emerge and establish over competing viruses, to provide effective disease management recommendations and prolong the durability of resistance sources. Specifically: 2.A. Determine the effect of variation among Polymyxa betae isolates on prevalence and dominance of soil-borne viruses affecting sugarbeet, including evaluation of virus competitiveness through collaborative studies involving this project using both in-house funds and those of a local collaborator in NP308. Completion within 5 years. 2.B. Assess accumulation of CYSDV in different host plants in relation to transmission and in development of host resistance using both in-house funds and collaboration with ARS Salinas vegetable breeding program (NP301). Completion within 5 years. 2.C. Identification of factors influencing emergence and dominance of existing and new curtoviruses in North America through analysis of competitive virus accumulation in host plants. Research will involve in-house funds, with completion within 3 years. Objective 3: Determine environmental and cultural factors contributing to the ability of viruses to induce disease to facilitate breeding efforts for resistance to soil-borne and insect-transmitted viruses affecting lettuce. Completion of both subobjectives within 5 years using both in-house funds and collaboration with Salinas vegetable project (NP301). 3.A. 
Develop methods for greenhouse-based evaluation of lettuce for resistance to soilborne tombusviruses through identification of environmental factors influencing disease development, and application of this knowledge to germplasm evaluation using controlled environments. 3.B. Identify sources of tospovirus resistance through evaluation of lettuce and Lactuca germplasm using mechanical transmission and viruliferous thrips under greenhouse conditions, for further development by breeders. Objective 4: Determine biological and ecological relationships among vectors and their host plants, the pathogens they transmit, and the environment, and develop novel intervention and management strategies for control of vector-borne diseases of vegetables, through the use of traditional, molecular biology, and bioinformatics approaches.
1b. Approach (from AD-416):
Objective 1: Some defense genes will be common to general sugarbeet or plant defense against pathogens. Determine similarities and differences among pathogens for gene expression between infected and pathogen free host plants based on results of studies currently concluding. Results will be compared with others including BNYVV and other pathogens through parallel studies. Objective 2: 2.A. Isolates of the plasmodiophorid vector of BNYVV, Polymyxa betae, differ for BNYVV transmission to sugarbeet. This is associated with increased presence of resistance-breaking forms of BNYVV. Single spore isolates of P. betae will be tested for differences in efficiency of BNYVV transmission and effect of vector isolate on virus competitiveness with virus titer. 2.B. Resistance from the exotic melon (Cucumis melo) accession PI 313970 can enhance resistance to CYSDV in cultivated melon, and provide high levels of resistance when combined with resistance source TGR-1551. Exotic melon accessions will be evaluated in replicated field plantings and studies will examine transmission efficiency of CYSDV from resistant and susceptible melons in comparison with virus concentration. 2.C. Individual curtoviruses accumulate to higher or lower titers during single and mixed infections, and this varies by host plant. This influences virus dominance in the field. Curtoviruses will be transmitted by beet leafhoppers from single and mixed infections with qPCR used for virus titer determination. If needed Agro-based delivery of virus isolates to specific hosts, or leafhopper membrane feeding studies could be used for virus delivery. Objective 3: 3.A. Long-day or high temperature treatment will induce development of tombusvirus symptoms on susceptible lettuce and can be used for selecting resistant and susceptible varieties. Growth chamber experiments will be used to determine optimal environmental conditions (light, temp, soil moisture etc.) 
for tombusvirus infection of lettuce using mechanical transmission experiments. Chamber and soil moisture and nutrition conditions can be modified as needed. 3.B. Resistance to Impatiens necrotic spot virus (INSV) and Tomato spotted wilt virus (TSWV) exists in wild or cultivated Lactuca germplasm and can be identified through greenhouse evaluation. Transmission of INSV and TSWV to Lactuca germplasm sources will be conducted using thrips vectors in the greenhouse. Virus detection will be performed using standard ELISA. If necessary, virus can be mechanically transmitted directly to lettuce from select hosts. Objective 4: Use a combination of bioinformatics analysis of insects with related applied and molecular entomological approaches to examine how insect vectors respond biologically and biochemically to environmental parameters. This will include but is not limited to the responses of whiteflies and leafhoppers to specific host plants, the presence or absence of plant viruses in host plants, and pesticides applied to host plants. Knowledge gained through these studies will be used to develop novel methods for vector population control through both biotechnology-based and genetics approaches.
3. Progress Report:
The Salinas lab is recognized internationally as a leader in the study of insect-transmitted and soil-borne viruses affecting sugar beet and vegetable production. The lab has characterized most of viruses in the genus Crinivirus as well as other viruses transmitted by insects and through soil. The lab has developed detection methods, and is actively working toward understanding the interacting epidemiological factors that drive emergence and establishment of these viruses. The lab works with breeding programs to improve and enhance the availability and performance of resistant varieties, and uses a wide range of applied, molecular, genomic, and technological approaches to address emergence, epidemiology, and control of sugar beet and vegetable viruses. A complex collaborative project is focused on control of whiteflies, a damaging insect pest of agricultural and horticultural crops throughout the world, and a vector of numerous plant viruses. Recently completed studies involving collaboration between the ARS Virology Lab in Salinas, California, as well as the ARS Virology Lab in Charleston, South Carolina, sequenced the genome of the whitefly, Bemisia tabaci. The Salinas Lab sequenced the transcriptome (RNA used for gene expression) of whitefly in response to transmission of two important and related whitefly-transmitted viruses, Tomato chlorosis virus (2015) and Cucurbit yellow stunting disorder virus (2016) and identified over 1000 and 250 differentially expressed genes, respectively, in response to virus infection of the source plant on which the whitefly fed. This information is being used to understand how these viruses interact with the whitefly vector and for development of RNA interference (RNAi) to specifically eliminate the whitefly vector (and not other insects) in vegetable crops and cassava, an important food staple in the developing world through collaborations in Africa. 
During the past year more than 30 RNAi constructs were tested for whitefly control by the ARS Virology Lab in Salinas, and 10 have been delivered to collaborators in Africa for evaluation for control of whitefly on cassava. Rhizomania disease of sugarbeet is caused by Beet necrotic yellow vein virus (BNYVV), and in the absence of viable resistance genes the virus can cause severe yield reductions. Studies conducted through this project have compared, and are continuing to study, the efficiency of BNYVV transmission by different isolates of Polymyxa betae, the soil-borne organism that transmits the virus; and new studies are comparing transmissibility of different BNYVV isolates by specific isolates of the P. betae vector. This work is focused on determining differences in transmission of BNYVV by this soil-borne organism, and should eventually lead to development of control strategies. Previous studies by the ARS Virology Lab in Salinas identified changes in the proteome of sugarbeet in response to BNYVV infection of susceptible and resistant plants. Although this information was valuable, additional information is needed to understand how BNYVV causes rhizomania disease on sugarbeet in order to use new technologies to interfere with the infection process and develop resistance. To this end, new research, begun during 2016, is evaluating the metabolome of resistant and susceptible sugarbeet in response to BNYVV infection. Tomato necrotic dwarf virus (ToNDV) is a virus that causes severe stunting and necrosis on tomato and had not been identified in the field in over 30 years. It was identified by the ARS virology lab in Salinas in weeds from Imperial County, California and in tomato from Kern County, California by University of California (UC), Davis collaborators. Previously only one isolate of ToNDV had been characterized (by the Salinas Lab).
New studies demonstrated a close genetic relationship between the Kern isolate and isolates from Imperial County, collected in the 1980's, but much greater sequence divergence was found in the weed isolate from Imperial County. Surveys for the virus will continue in fall 2016 to determine if the virus is re-emerging and poses a significant risk to tomato production in California. New variants of Beet curly top virus (BCTV) have been emerging and replacing traditional strains of the virus in several locations and crops in | 2.12538 | openbmb/Ultra-FineWeb |
$j(t) = (\mathrm{d}^3 a/\mathrm{d}t^3)\,[H(t)]^{-3}/a$ the jerk parameter, and so on. As provided, these relations ignore curvature but terms for a non-flat Universe can be included. Thus seen, the Hubble 'constant' is the present value of the expansion, $H_0 = H(t_0)$, today. The higher-order derivatives of expansion, $q_0$ and $j_0$, can be directly determined from this relation, provided a set of distance and redshift measurements covering a wide range of redshift is available (e.g., with Type Ia supernovae, SNe Ia, at $0 < z < 1$, free of cosmological model assumptions). This approach, following the definition, is called the 'Direct' route, because it does not invoke a cosmological model and has negligible dependence on gravity.
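The 'Direct' route rests on the Taylor expansion of the scale factor implied by these definitions. A standard flat-space form (following conventional cosmography, e.g., Visser 2004; not quoted from this article) is:

```latex
a(t) = a_0\left[1 + H_0\,(t-t_0) - \tfrac{1}{2}\,q_0 H_0^2\,(t-t_0)^2
     + \tfrac{1}{6}\,j_0 H_0^3\,(t-t_0)^3 + \cdots\right],

d_L(z) = \frac{cz}{H_0}\left[1 + \tfrac{1}{2}\,(1-q_0)\,z
       - \tfrac{1}{6}\left(1 - q_0 - 3q_0^2 + j_0\right)z^2 + \mathcal{O}(z^3)\right],
```

so that distances and redshifts over $0 < z < 1$ constrain $H_0$, $q_0$, and $j_0$ without invoking a cosmological model.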
The alternative route to knowledge of $H_0$ comes from predicting its present value from fine calibration of the cosmological model, ('Vanilla') $\Lambda$ cold dark matter ($\Lambda$CDM), measured from the cosmic microwave background (CMB). By invoking $\Lambda$CDM we are referring to a description of the Universe as composed of the simplest dark matter (i.e., non-interacting, non-decaying, from a particle with only gravitational interactions), the simplest dark energy (i.e., a 'Cosmological Constant'), atoms, photons, and neutrinos (three species), without spatial curvature, and without any other cosmologically important features (hence 'Vanilla'). We can use the model as it would have looked at $z > 1000$ to predict the physical size of fluctuations in the primordial plasma and compare the fluctuation spectrum with their angular size as observed from the CMB. This comparison serves to calibrate the six free parameters in $\Lambda$CDM. Once calibrated, the model predicts that dramatic changes in the Universe will occur (matter dominated followed by vacuum energy dominated) and describes the expansion history, $H(z)$, from $z = 1000$ to $z = 0$, and hence the value of $H_0$. In the appendix we provide a detailed description of how, in practice, the value of $H_0$ is measured from the CMB. We encourage the reader to review it, as it may challenge the prior belief that this is simple! See Kamionkowski and Riess (2022) and Planck Collaboration et al. (2020) for details. As should be clear, $\Lambda$CDM is a phenomenological model with parameters that stand in for a physical description of the unknown nature of 96% of the present Universe. Thus, $H_0$ is uniquely suited to provide an 'end-to-end' test of $\Lambda$CDM and our understanding of the Universe. By comparing the direct and model-dependent routes, we can test the model. Figure 1 illustrates this test.
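The model-dependent route can be sketched numerically: once the $\Lambda$CDM parameters are fixed, $H(z)$ follows from the Friedmann equation. The parameter values below are illustrative Planck-like numbers, assumptions of this sketch rather than values taken from the article.

```python
import math

# Sketch of the model-dependent route: for a flat LambdaCDM model in the
# late Universe (radiation neglected), the Friedmann equation gives
#   H(z) = H0 * sqrt(Om * (1+z)^3 + (1 - Om)).
# Illustrative Planck-like parameters; not quoted from this article.
H0 = 67.4        # km/s/Mpc (illustrative CMB-calibrated value)
OMEGA_M = 0.315  # matter density parameter (illustrative)

def hubble(z: float) -> float:
    """Expansion rate H(z) in km/s/Mpc for a flat LambdaCDM model."""
    return H0 * math.sqrt(OMEGA_M * (1.0 + z) ** 3 + (1.0 - OMEGA_M))

print(f"H(0) = {hubble(0.0):.1f} km/s/Mpc")
print(f"H(1) = {hubble(1.0):.1f} km/s/Mpc")
```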
Figure 1: The expansion rate of the Universe can be predicted by the $\Lambda$CDM model with its parameters calibrated by the CMB, or measured directly and locally from redshifts and distances.
2 The Distance Ladder: From Geometry to Cepheids to Type Ia Supernovae
Because the indirect route offers better than percent-level precision, fully leveraging the comparison demands a percent-level local measurement. The first generation of such measurements from the Hubble Space Telescope (HST) Key Project (KP; see Freedman and Madore, 2023, in this conference), foundational work of immense importance, reached 10% precision by 2001 (see also Sandage et al., 2006). Reaching greater precision requires a considerable redesign of the approach and methods, while at the same time leveraging new geometric measurements (e.g., the ESA Gaia mission parallaxes). In 2005 we started the SH0ES program to use new instruments on the HST to accomplish this. The SH0ES ladder is composed of three components or "rungs": Geometry to Cepheids to Type Ia supernovae. The critical, new elements of SH0ES are:
- Cancel flux calibration errors using HST to measure all Cepheids (rung 1 and 2);
- Observe all Cepheids in the Near-Infrared (NIR) to minimize dust;
- Use best quality SN data, consistently calibrated (Pantheon+, Brout et al., 2022);
- Comprehensive error analysis, include covariance, analyze plausible variants;
- Publicly release data, $10^7$ data numbers, and code to fit the data.
2.1 Geometric Rung
There are three geometric anchors of the distance ladder which rely on different systems and measurements and so are fully independent of each other. Milky Way Cepheid parallaxes, thanks to Gaia (now in Data Release 3), provide ~1% precision in the calibration of the Hubble constant. These are a tremendous advancement over past work. However, we could not leverage the geometric precision these afford without an equally precise photometric tie. The SH0ES project has devoted considerable effort to developing a spatial scanning method for photometric measurements of ultra-bright targets with HST that demonstrates extreme precision and accuracy for direct calibration of Cepheids on the second rung (in SN Ia hosts). These scans were also previously used to measure parallaxes (before Gaia). More recently, even greater precision and independence from Gaia systematics (the parallax offset term) has come from Cepheids in open star clusters, where hundreds of stellar parallaxes may be averaged (Riess et al., 2022a). Figure 2 shows three different approaches to measuring Cepheid parallaxes, all in good agreement.
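The gain from cluster Cepheids comes from plain error averaging: with N independent member parallaxes of comparable precision, the uncertainty on the cluster's mean parallax shrinks roughly as 1/sqrt(N). The per-star uncertainty below is an illustrative number, not an actual Gaia value.

```python
import math

# Why cluster Cepheids help: averaging N independent parallaxes of
# similar precision reduces the error on the mean by ~1/sqrt(N),
# assuming independent Gaussian errors.
def sigma_of_mean(sigma_star: float, n_stars: int) -> float:
    """Uncertainty of the mean of n_stars independent parallaxes."""
    return sigma_star / math.sqrt(n_stars)

sigma_star = 25.0  # micro-arcsec per star (illustrative, not a Gaia value)
for n in (1, 100, 400):
    print(f"N={n:4d}: sigma_mean = {sigma_of_mean(sigma_star, n):5.2f} uas")
```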
Figure 2: The Milky Way Cepheid period–luminosity relation in the HST NIR, reddening-free (Wesenheit) system as calibrated with three samples. Parallaxes from HST spatial scanning (Riess et al., 2018) for 8 Cepheids are in blue and yield 3% precision. The 68 points in gray (Riess et al., 2021) use Gaia early Data Release 3 (EDR3) parallaxes with simultaneous calibration of the parallax offset. The red points come from cluster Cepheids and do not require parallax offset calibration as they are measured in the range where Gaia is best calibrated (Riess et al., 2022a). These samples differ in their parallax precision (inset), leading to the low dispersion of $\sigma = 0.07$ mag for the cluster Cepheids.
Another anchor comes from the exquisite detached-eclipsing-binary measurements in the Large Magellanic Cloud (LMC; Pietrzyński et al., 2019), which reach 1.2% precision. Again, leveraging these requires direct measurements of Cepheids with HST. These measurements (Riess et al., 2019) yield both greater precision and accuracy, especially in the NIR, than those from the ground (see past ground data in Freedman and Madore, 2023, in this conference), with a dispersion of ~0.07 mag. They also demonstrate the power of NIR data to mitigate dust and the use of colors to deredden for remaining dust. These are shown in Figure 3.
Figure 3: Period-mean magnitude relation for the 70 LMC Cepheids observed with HST using DASH mode.
Figure 4: Comparison of Cepheids measured in a dense (inner) field (in red) and sparse (outer) field (in blue) of NGC 4258. Because these Cepheids are at the same distance, the comparison shows the accuracy of the background estimates, which differ in the mean over the same sampled range, $0.7 < \log P < 1.2$. | 1.414127 | EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
    by this locator.
    @params
        locator - one of: String Locator instance name
                  String locator definition (using By.CSS_SELECTOR)
                  tuple locator containing (locator type, value)
    @returns
        webelement or list of webelements
    """
    if type(locator) == str:
        # a string definition key was passed in
        if locator in self.locators.keys():
            # we have a definition for this locator key
            return self.locators[locator].find()
        else:
            # default to By.CSS_SELECTOR if we don't recognize it
            return self.find(('CSS_SELECTOR', locator))
    else:
        # a tuple definition was passed in
        return self.driver.find_elements(getattr(By, locator[0]), locator[1])

def find_child(self, webelement, locator):
    """
    Find exactly one instance that is a child of the given webelement, identified by this locator.
    If multiple elements are found, returns the first in the list.
    @params
        locator - one of: String Locator instance name
                  String locator definition (using By.CSS_SELECTOR)
                  tuple locator containing (locator type, value)
    @returns
        webelement identified by this locator.
    """
    results = self.find_children(webelement, locator)
    if len(results) == 0:
        raise Exception("Locator '%s' not found" % locator)
    return results[0]

def find_children(self, webelement, locator):
    """
    Find all instances that are children of a given webelement, identified by this locator.
    @params
        locator - one of: String Locator instance name
                  String locator definition (using By.CSS_SELECTOR)
                  tuple locator containing (locator type, value)
    @returns
        webelement or list of webelements
    """
    if type(locator) == str:
        # a string definition key was passed in
        if locator in self.locators.keys():
            # we have a definition for this locator key
            return webelement.find_elements(getattr(By, self.locators[locator].by), self.locators[locator].locator)
        else:
            # default to By.CSS_SELECTOR if we don't recognize it
            return self.find_children(webelement, ('CSS_SELECTOR', locator))
    else:
        # a tuple definition was passed in
        return webelement.find_elements(getattr(By, locator[0]), locator[1])

def get_locator(self, name):
    """
    Return the raw locator value of a given Locator name
    @params
        name - name of the Locator instance
    @returns
        String value of the locator, regardless of locator type.
    """
    ...
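The string/tuple dispatch used by the find methods (named key resolves through a registry, bare string falls back to CSS_SELECTOR, tuple passes through as an explicit strategy) can be exercised on its own. The sketch below reimplements just that normalization step without Selenium; the registry contents and function name are hypothetical, for illustration only.

```python
# Standalone sketch of the locator-normalization logic: named keys
# resolve through a registry, bare strings fall back to CSS_SELECTOR,
# and (by, value) tuples pass through unchanged.
# The registry contents are hypothetical examples.
LOCATOR_REGISTRY = {
    "login_button": ("ID", "login"),
    "nav_links": ("CSS_SELECTOR", "nav a"),
}

def normalize_locator(locator):
    """Collapse a name / raw selector / (by, value) tuple to (by, value)."""
    if isinstance(locator, str):
        if locator in LOCATOR_REGISTRY:
            return LOCATOR_REGISTRY[locator]
        return ("CSS_SELECTOR", locator)  # default strategy for raw strings
    return tuple(locator)

print(normalize_locator("login_button"))   # resolves through the registry
print(normalize_locator("div.card h2"))    # falls back to CSS_SELECTOR
```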
Playwright tutorial
LambdaTest's Playwright tutorial will give you a broader idea of the Playwright automation framework, its unique features, and its use cases, with examples to deepen your understanding of Playwright testing. This tutorial provides A-to-Z guidance, from installing the Playwright framework to best practices and advanced concepts.
Chapters:
1. What is Playwright: Playwright is comparatively new but has gained good popularity. Get to know some history of Playwright along with some interesting facts about it.
2. How To Install Playwright: Learn in detail what basic configuration and dependencies are required to install Playwright and run a test. Get step-by-step directions for installing the Playwright automation framework.
3. Playwright's Futuristic Features: Launched in 2020, Playwright quickly gained huge popularity because of compelling features such as the Playwright Test Generator and Inspector, the Playwright Reporter, and the Playwright auto-waiting mechanism. Read up on those features to master Playwright testing.
4. What is Component Testing: Component testing in Playwright is a unique feature that allows a tester to test a single component of a web application without integrating it with other elements. Learn how to perform component testing with the Playwright automation framework.
5. Inputs and Buttons in Playwright: Every website has input boxes and buttons; learn about testing inputs and buttons in different scenarios, with examples.
6. Functions and Selectors in Playwright: Learn how to launch the Chromium browser with Playwright. Also gain a better understanding of some important functions like "BrowserContext", which allows you to run multiple browser sessions, and "newPage", which interacts with a page.
7. Handling Alerts and Dropdowns in Playwright: Playwright interacts with different types of alerts and pop-ups, such as simple, confirmation, and prompt, and different types of dropdowns, such as single-selector and multi-selector. Get hands-on with handling alerts and dropdowns in Playwright testing.
8. Playwright vs Puppeteer: Get to know the differences between the two testing frameworks: how they differ from one another, which browsers they support, and what features they provide.
9. Run Playwright Tests on LambdaTest: Playwright testing with LambdaTest leverages test performance to the utmost. You can run multiple Playwright tests in parallel on the LambdaTest test cloud. Get a step-by-step guide to running your Playwright tests on the LambdaTest platform.
10. Playwright Python Tutorial: The Playwright automation framework supports all major languages, such as Python, JavaScript, TypeScript, and .NET. However, there are various advantages to Python end-to-end testing with Playwright because of its versatile utility. Get the hang of Playwright Python testing with this chapter.
11. Playwright End-to-End Testing Tutorial: Get hands-on with Playwright end-to-end testing and learn to use some exciting features such as Trace Viewer, debugging, networking, component testing, visual testing, and many more.
12. Playwright Video Tutorial: Watch video tutorials on Playwright testing from experts and get an in-depth explanation of Playwright automation testing.
| 1.368035 | EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample |
Students will be able to identify fractions based on visual representations, as well as construct a visual representation of a fraction.
This is a 3 part worksheet.
Part I. Write a fraction based on its visual representation (the worksheet goes hand in hand with an online PowerPoint)
Part II. Mixed practice questions based on various visual models of fractions including vertical and horizontal bars, jars, and pizzas.
Students are asked to shade in pictures to represent a given fraction | 2.519546 | HuggingFaceFW/fineweb-edu |
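The skill the worksheet drills, reading a shaded picture as a fraction, can be sketched in a few lines of Python using the standard library's fractions module; the helper name below is made up for illustration:

```python
from fractions import Fraction

def shaded_fraction(shaded_parts, total_parts):
    """Fraction shown by a bar, jar, or pizza with some parts shaded.
    Fraction() reduces to lowest terms automatically."""
    return Fraction(shaded_parts, total_parts)

print(shaded_fraction(3, 6))   # 3 of 6 equal slices shaded -> 1/2
print(shaded_fraction(2, 8))   # 2 of 8 equal parts shaded -> 1/4
```

The automatic reduction mirrors what students practice by hand when they simplify a fraction read off a picture.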
Where would you rather set foot: Mars or the moon?
Only twelve people have set foot on the moon, so if you choose to jet set to Earth's only natural satellite, you will be in elite company. Not only would you be able to bounce around joyfully on the moon's surface (with one-sixth the gravitational pull of Earth), but you also have the chance to leave a permanent mark on this space rock. Among Neil Armstrong's first footprint and 3.8 billion-year-old craters, you could write your own name in moon dust—some kid somewhere, right this moment, is imagining the epic prom-posal they could create by writing it on the moon, I'm sure of it. Nothing says magic quite like that.
Beyond collecting moon rocks to bring back to all of your friends and family (there is no better souvenir), you could also finally reveal to everyone what mysteries lie on the dark side of the moon.
When you're not searching for intelligent lifeforms, you could also help build the first space colony on Mars. Forget writing your name in glorified sand on the moon, if you venture to Mars, your name would be etched in history. | 1.533117 | m-a-p/FineFineWeb |
The county's Utility office was created to help solve the problem of polluted storm-water and to improve the drainage capability of areas that are susceptible to flooding. Its efforts also go toward reducing pollution caused by silt, oil, gasoline, fertilizers, pesticides, and other litter carried by the storm-water to the drainage systems and canals that have been developed to prevent flooding during a heavy rainfall. Storm-water drains not only have to remove water from the streets as quickly as possible, but they also have to deal with all of the contaminants that get picked up along the way. Once storm-water pollution reaches our waterways, it can be harmful to plants and animals. Large doses of organic pollution can cause fish and other marine life to suffocate from lack of dissolved oxygen (consumed in the process of decomposition), litter can injure unsuspecting fish, turtles, and even manatees and whales, and harsh chemicals (like automotive and cleaning fluids) can be toxic to marine life.
If your parking lot drain, commonly known as either a storm sewer or a catch basin, is flooded, slow-draining, full of leaves, trash, or sand and interfering with your business, call Pump Outs Unlimited for a same-day free quote!
Black Trees Under Two Suns
Gliese 667 is one of two multiple star systems (the other being 55 CnC) known to host planets below 10 Earth masses. Image Credit: ESO/L. Calçada
A sky with two suns is a favorite image for science fiction films, but how would a binary star system affect life evolving on an orbiting planet?
Jack O'Malley-James of the University of St. Andrews has studied what plants might be like on an Earth-like planet with two or three suns and found that they may appear black or grey. He presented the results at the RAS National Astronomy Meeting in Llandudno on Tuesday 19th April.
Photosynthesis — converting sunlight into energy — is the basis for the majority of life on Earth. It is the energy source for plants and, hence, animals higher up the food chain. With multiple light sources, life may have adapted to use all suns, or different forms may develop that choose to use one specific sun. This may be the more likely option for planets on which parts of the surface are illuminated by only one sun for long periods of time.
"If a planet were found in a system with two or more stars, there would potentially be multiple sources of energy available to drive photosynthesis. The temperature of a star determines its color and, hence, the color of light used for photosynthesis. Depending on the colors of their star-light, plants would evolve very differently,"said O'Malley-James.
O'Malley-James is working on a PhD, supervised by Dr. Jane Greaves at the University of St. Andrews, Prof. John Raven of the University of Dundee and Prof. Charles Cockell of The Open University, to assess the potential for photosynthetic life in multi-star systems with different combinations of Sun-like stars and red dwarfs.
Black plants on a world with two suns. Credit: University of Saint Andrews
Sun-like stars are known to host exoplanets and red dwarfs are the most common type of star in our Galaxy, often found in multi-star systems, and old and stable enough for life to have evolved. Over 25% of Sun-like stars and 50% of red dwarfs are found in multi-star systems. In the team's simulations, the Earth-like planets either orbit two stars close together or orbit one of two widely separated stars. The team has also looked at combinations of these scenarios, with two close stars and one more distant star. | 2.467712 | Zyphra/Zyda-2 |
Amyotrophic lateral sclerosis (ALS) is a progressive and lethal disease of motor neuron degeneration, leading to paralysis of voluntary muscles and death by respiratory failure within five years of onset. Frontotemporal dementia (FTD) is characterised by degeneration of frontal and temporal lobes, leading to changes in personality, behaviour, and language, culminating in death within 5-10 years. Both of these diseases form a clinical, pathological, and genetic continuum of diseases, and this link has become clearer recently with the discovery of a hexanucleotide repeat expansion in the C9orf72 gene that causes the FTD/ALS spectrum, that is, c9FTD/ALS. Two basic mechanisms have been proposed as being potentially responsible for c9FTD/ALS: loss-of-function of the protein encoded by this gene (associated with aberrant DNA methylation) and gain of function through the formation of RNA foci or protein aggregates. These diseases currently lack any cure or effective treatment. Antisense oligonucleotides (ASOs) are modified nucleic acids that are able to silence targeted mRNAs or perform splice modulation, and the fact that they have proved efficient in repeat expansion diseases including myotonic dystrophy type 1 makes them ideal candidates for c9FTD/ALS therapy. Here, we discuss potential mechanisms and challenges for developing oligonucleotide-based therapy for c9FTD/ALS.
Original publication
Journal article
J Nucleic Acids
| 1.985566 | Zyphra/Zyda-2 |
are under the control of the CCM cluster at the central site. Notice that even though call processing is centralized, DSP resources can be distributed. If network connectivity, such as IP WAN, exists between sites, it carries signaling messages to and from remote sites. Even if a device in a remote site calls another device within the same site, signaling traffic must go through the WAN connection. However, VoIP packets (not signaling) go through the WAN connection only for intersite calls. Usually, each site has a PSTN connection that serves two purposes: It allows the site to make outside calls, and it can act as an alternate route for when the WAN is down or is utilized to its limit. CAC is used to prohibit too many active intersite calls from hindering data communications or making the quality of calls drop. Administrators decide how many concurrent intersite calls over the WAN connection are viable and configure CAC to deny permission to any new calls over the WAN when the number of active intersite calls reaches that level. In those situations, a new intersite call can either fail (reorder tone or annunciator message), or it can be transparently rerouted through PSTN by means of automated alternate routing (AAR). If a remote site temporarily loses its WAN connection to the central site, rendering its IP phones useless, SRST is utilized on the gateway of that site. SRST is a feature available on Cisco gateways that allows the IP phones at the remote site to stay active (in the absence of a path to their CCM server) and be able to call each other within the site. SRST routes all calls through the PSTN when the WAN connection is down.
Multisite with Distributed Call Processing Model
In the multisite with distributed call processing model, each site has its own Cisco Unified CallManager cluster controlling all call processing aspects of that site; hence the term distributed call processing. Application servers and DSP resources are also distributed at all sites.
Sites, in this case, do not depend on the call processing offered at another site. In distributed call processing, each site has a CallManager cluster. Please note that the other resources (voice mail, IPCC, IVR, DSP resources, etc.) can be centralized or distributed; while they're normally distributed, they do not have to be. The WAN connection between the sites carries intersite data exchange, signaling, and VoIP packets. However, when a device calls another device within its own site, no traffic is sent over the WAN. CAC is still necessary to prohibit too many calls from going through the WAN connection. Each site has PSTN connectivity, which serves two purposes: it allows outside enterprise calls for each site, and it allows rerouting of intersite calls that cannot go through the WAN connection (either due to CAC denial or WAN outage).
Chapter 1: Cisco VoIP Implementations
This model is comparable to a legacy telephony model, where an enterprise would have a PBX system at each site and, using telco services, the enterprise would connect each pair of PBX systems at remote sites using tie-lines or trunks. In the distributed call processing model, an IP Telephony trunk must be configured between each pair of CallManager clusters (IP PBX) to make intersite calls possible. Examples of IP Telephony trunks that CCM supports are intercluster trunks, H.323 trunks, and SIP trunks.
Clustering over WAN Model
This model uses only one Cisco CallManager cluster for all sites. However, not all servers of the cluster are put in a single site together. Instead, the CCM servers, application servers, and DSP resources are distributed to different locations to provide local service to their clients (such as IP phones and gateways). The CCM servers need to communicate over the intersite IP WAN connection to perform database synchronization and replication. For clustering over WAN to work properly, the maximum round trip delay between each pair of servers within the cluster must be less than 40 ms. In this model, IP phones acquire services and are controlled by servers in the same site. IP WAN carries signaling and voice packets only for intersite calls. CAC is needed to control the number of calls utilizing the WAN connection. A PSTN connection at each site is necessary for outside calls and for AAR purposes.
Identifying Voice Commands in IOS Configurations
Cisco routers that have proper interfaces can be configured to provide connectivity between analog or digital telephony devices over an IP network; they are called voice gateways in those circumstances. Figure 1-16 shows two voice gateways, R1 and R2, each with an analog phone connected to its FXS interface. To provide connectivity between the two phones over the IP network, in addition to basic configurations, each of the routers (gateways) needs one plain old telephone service (POTS) and one VoIP dial peer configured.
Figure 1-16  Two Sample Voice Gateways with Analog Phones Connected to Their FXS Interfaces (R1, IP address 192.168.1.1, FXS port 1/1/1, extension 11; R2, IP address 192.168.2.2, FXS port 2/0/0, extension 22; the two gateways are connected over an IP network)
A dial peer is a Cisco IOS configuration that links or binds a telephone number to a local POTS interface such as FXS or to a remote IP address; therefore, one POTS dial peer and one VoIP dial peer exist. The series of dial peers configured on a gateway together form its VoIP call routing table. The configurations of R1 and R2 shown in Example 1-1 and Example 1-2 take advantage of
the default VoIP signaling protocol on Cisco gateways (H.323). If the phone on R1 goes off-hook and, after receiving the dial tone, number 22 is dialed, R1 sends H.323 signaling (call setup) messages to the R2 IP address 192.168.2.2. After the message from R1 is received and processed, based on the dialed number 22, R2 sends a ring signal to interface 2/0/0 (the FXS port), and the phone on R2 rings.
Example 1-1  R1 VoIP Configuration
dial-peer voice 1 pots
 destination-pattern 11
 port 1/1/1
dial-peer voice 2 voip
 destination-pattern 22
 session target ipv4:192.168.2.2
Example 1-2  R2 VoIP Configuration
dial-peer voice 1 pots
 destination-pattern 22
 port 2/0/0
dial-peer voice 2 voip
 destination-pattern 11
 session target ipv4:192.168.1.1
Call Admission Control (CAC)
Call admission control is a feature that is configured to limit the number of concurrent calls. Usually, because the bandwidth of the WAN link is much less than that of LAN links, CAC is configured so that WAN bandwidth does not get oversubscribed by VoIP calls. CAC complements QoS configurations. For instance, if a strict priority queue with enough bandwidth for three voice calls is configured on all routers between two phones, then as long as there are fewer than four concurrent calls, all will be of good quality. What would happen if ten calls went active concurrently? If all the VoIP traffic packets (RTP) must share the strict priority queue that is provisioned with enough bandwidth for three calls, routers will drop many VoIP packets when there are ten active calls. The packets that will be dropped belong to any or all active calls, indiscriminately. It is wrong to believe that only packets associated with the calls beyond the third one will be dropped. As a result, all calls can and probably will experience packet drops and, naturally, poor call quality. When there are available and reserved resources for a certain number of concurrent calls, CAC must be configured so that no more calls than the limit can go active. QoS features such as classification, marking, congestion avoidance, congestion management, and so on provide priority services to voice packets (RTP) but do not prevent their volume from exceeding the limit; for that, you need CAC.
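The arithmetic behind provisioning a priority queue "with enough bandwidth for three voice calls" can be sketched as follows. This is a simplified, illustrative calculation for G.711 with 20 ms packetization, counting only IP/UDP/RTP overhead and ignoring Layer 2 headers and cRTP; the function names are made up:

```python
def voip_call_bandwidth_bps(payload_bytes=160, ip_udp_rtp_bytes=40, packets_per_sec=50):
    """Per-call IP bandwidth for G.711 at 20 ms packetization (no Layer 2 overhead).
    160 bytes of voice payload + 40 bytes of IP/UDP/RTP headers, 50 packets/s."""
    return (payload_bytes + ip_udp_rtp_bytes) * 8 * packets_per_sec

def cac_call_limit(priority_queue_bps, per_call_bps):
    """Maximum concurrent calls a CAC policy should admit for a given queue size."""
    return priority_queue_bps // per_call_bps

per_call = voip_call_bandwidth_bps()      # 80,000 bps per G.711 call
print(cac_call_limit(256_000, per_call))  # a 256 kbps priority queue admits 3 calls
```

An eleventh-hour fourth call would exceed the queue and degrade all active calls, which is exactly the situation CAC is configured to prevent.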
50
Chapter 1: Cisco VoIP Implementations
Foundation Summary
The Foundation Summary is a collection of information that provides a convenient review of many key concepts in this chapter. If you are already comfortable with the topics in this chapter, this summary can help you recall a few details. If you just read this chapter, this review should help solidify some key facts. If you are doing your final preparation before the exam, the information in this section is a convenient way to review the day before the exam. Benefits of packet telephony networks include usage of common infrastructure for voice and data, lower transmission costs, more efficient usage of bandwidth, higher employee productivity, and access to new communication devices. Main packet telephony components are phones, video end points, gateways, MCUs, application servers, gatekeepers, and call agents. Voice gateways can have analog interfaces such as FXS, FXO, and E&M; they may have digital interfaces such as BRI, CT1/PRI, or CE1/PRI. The main stages of a phone call are call setup, call maintenance, and call teardown. Call control has two main types: centralized call control and distributed call control. H.323 and SIP are examples of distributed VoIP call control protocols, whereas MGCP is an example of a centralized VoIP call control protocol. The steps involved in analog-to-digital
Electrochemical impedance spectroscopy (EIS) was utilized to delve into the corrosion inhibition of eutectic Cu-Ag alloy and its components (Cu and Ag) in an aqueous, aerated 0.1 M KNO3 solution. This alloy plays a major role in the water cooling of central processing units in data storage centers. Two organic inhibitors, namely, 1,2,3-benzotriazole (BTA) and 2,5-dimercapto-1,3,4-thiadiazole (DMTD), were utilized in this study. The corrosion inhibition slowly evolved over time as diagnosed by an increase in the charge transfer impedance and the gradual tendency of the Nyquist profiles to arc toward the real axis. This trend was attributed to the gradual formation of organometallic passivation layers. The EIS data underlined the specific affinity of BTA and DMTD toward the Cu and Ag surfaces, respectively. A transition of the double-layer equivalent circuit element from ideal capacitance to a constant phase element was observed for the alloy compared to the pure metals. This was attributed to the heterogeneity induced by Cu-rich and Ag-rich phases in the alloy and by the formed oxides/protective film on the alloy surface. The EIS study demonstrated that both BTA and DMTD can provide sufficient corrosion inhibition to Cu-60Ag alloy with DMTD being significantly more effective.
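The transition the abstract describes, from an ideal double-layer capacitance to a constant phase element (CPE), can be illustrated with the standard CPE impedance formula, Z = 1 / (Q(jw)^n). The parameter values below are made up for illustration, not taken from the study:

```python
import cmath

def z_cpe(freq_hz, q, n):
    """Impedance of a constant phase element: Z = 1 / (Q * (j*omega)**n).
    n = 1 recovers an ideal capacitor with C = Q; n < 1 models a
    heterogeneous surface such as a two-phase Cu-Ag alloy."""
    omega = 2 * cmath.pi * freq_hz
    return 1 / (q * (1j * omega) ** n)

z_ideal = z_cpe(1000.0, 1e-6, 1.0)   # ideal capacitor: phase is exactly -90 degrees
z_alloy = z_cpe(1000.0, 1e-6, 0.85)  # CPE: phase -76.5 degrees, a "depressed" Nyquist arc
print(cmath.phase(z_ideal), cmath.phase(z_alloy))
```

The constant phase angle of -n * 90 degrees is what flattens the semicircle on a Nyquist plot, matching the heterogeneity argument made in the abstract.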
Temporal Aspects of Corrosion Inhibition on Copper, Silver, and Copper-Silver Alloys: An Electrochemical Impedance Spectroscopy Study
Hooman Rahmani, Neil Spinner, Efstathios I. Meletis; Temporal Aspects of Corrosion Inhibition on Copper, Silver, and Copper-Silver Alloys: An Electrochemical Impedance Spectroscopy Study. CORROSION 1 August 2023; 79 (8): 881–890. doi: _URL_
| 1.35197 | m-a-p/FineFineWeb |
10 Tips for Parents of Prospective College Students
1. Choosing a Career/Choosing a Major
Ultimately, your son or daughter should make the choice. Of course, you may want to mention factors to consider, such as job market demand, salary ranges, long-range opportunities, skills required, etc. Just because an occupation is "hot" now does not mean it will be equally in demand in 10 years or that your child has the aptitude or motivation for it.
2. Choosing to Double Major/Choosing a Major and Minor
Most employers do not place a premium on a double major. It usually requires an extra one or two semesters to obtain a second major and does not particularly enhance a student's marketability. Exceptions would be a second major or a major and minor chosen for a specific career, such as English and chemistry for technical writing, or a health policy major and business minor for hospital administration.
3. Grade Point Average (GPA)
Some students who get off to a rocky start eventually pull up their grades; however, this can be very difficult to do. Many employers stress the student's overall background: work experience, number of hours worked during the school year to finance college, leadership activities, etc. Encourage your son or daughter to make academics a high priority beginning with his or her freshman year. It is important to remember that it may take him or her a while to adjust to the rigorous academic demands of college.
4. Obtaining Marketable Skills
Most employers today put more emphasis on graduates' skills than on their academic majors. Encourage your son or daughter to develop strengths in at least two or three of the following areas:
- Computer skills (e.g., programming, word processing, spreadsheets, data base management, e-mail, Internet);
- Quantitative skills (e.g., accounting, statistics, economics);
- Communication skills (e.g., written and oral);
- Marketing/selling skills (e.g., sales, publicity, fundraising);
- Scientific skills (e.g., lab skills, scientific research);
- Foreign language skills (e.g., especially Spanish, Portuguese, Chinese, or Russian);
- Leadership skills (e.g., supervisory, extracurricular leadership roles, teamwork/team leader).
5. Leadership Activities
Many employers rate leadership activities even more highly than GPA. Students who were very active in high school activities may be less involved in college extracurricular activities. However, employers regard high school as "ancient history" for a college senior. It is more valuable for a student to be involved in a few meaningful leadership roles on campus than to be in a "laundry list" of many campus clubs.
6. Career-Related Work Experience
You may want your son or daughter to work in his or her hometown every summer. However, the experience gained as a lifeguard or ice cream shop counter clerk does not compare to that which comes from an internship (paid or unpaid) in the career field that he or she aspires to enter. Future employers will seek graduates with relevant, real-world work experience. Some students have little to write about on a resume if their summers were spent in school, traveling, or working at low-level jobs. We strongly suggest that students seek career-related experience for their sophomore and junior summers even if they must live away from home or accept an unpaid internship. Students needing financial support can combine an unpaid internship with a paid job such as waiter/waitress, etc.
7. Graduating Early, Graduating Late
Some students graduate early through advanced placement credits, heavy course loads, and summer school courses. The advantages are lower educational expenses and the ability to start employment or graduate school earlier. The disadvantages may include the sacrifice of academic honors, work experience, and extracurricular and volunteer activities that may contribute to a student's maturity level and qualifications. Other students graduate late due to light course loads, academic difficulties, changing majors, poor academic advising, lack of direction, or reluctance to leave the cocoon of the college environment. Advantages to late graduation include the ability to improve grades with light class loads, extra time to change majors, the ability to take additional electives to improve marketability, and extra time to gain more career-related or leadership experience. Disadvantages to late graduation are increased college costs and possible disapproval of employers and graduate schools.
8. Planning for Graduate/Professional School
About 88 percent of the nation's college freshmen indicated in a recent survey that they plan to go to graduate or professional school, yet only about 24 percent do so within a year of completing their bachelor's degree. Students aspiring to graduate or professional school should: Be clear about the reasons they want to go on for further education; research the qualifications required for admission and be realistic about their chances of acceptance; and always have a "Plan B" or back-up plan in case they are not accepted. Students should discuss their interest in graduate or professional school well before their senior year with their academic adviser; the college's graduate or professional school adviser (e.g., the pre-law or pre-med adviser); and a college career adviser to obtain advice and guidance from three different perspectives.
9. Taking Time Off
Many students want to take time off after graduation from college before attending graduate school or taking a career-related job. Future employers will want to know how the student has spent the intervening time. Do activities during this period demonstrate relevance to future career goals and/or a good work ethic? While short-term travel may be personally broadening, it does not increase a student's marketability to employers unless it is seen as career related. Therefore, the time off may result in a longer job search. For example, management trainee programs, which often begin shortly after graduation and hire large numbers of new graduates, may be filled by the time your child is ready to begin a job search.
10. Using the College Career Services Office
Students should begin using their campus career office in their freshman year. Virtually all career offices provide individual career counseling/advising, career planning workshops, internship assistance, and career fairs and programs. Your son or daughter should seek help early with choosing a career and preparing for it. Competition for good jobs, particularly in certain fields, is stiff. The career office can advise students about how to become a strong candidate for their field of interest.
Source: Article by Marcia B. Harris and Sharon L. Jones | 1.940351 | HuggingFaceFW/fineweb-edu |
Who, among those blessed with extra cash, doesn't remember their first Mac? Or first iPod? Or first iPhone? Or first iPad? Or, for that matter, their first visit to a sleek, modernist Apple store? Or first appointment at the Genius Bar?
Will Steve Jobs' death (on Oct. 5) restore us to agnosticism when it comes to electronic marvels? Many had become faithful converts to the power of high-tech. We had faith that each invention would be better than the last. Apple's product announcements had teleological force—we needed to wait only a little before another brilliant and stylish bit of Apple wizardry paradigm-shifted our lives—yet again. And we were justified in our faith. Revolutionary products did arrive. And life did change. For the better.
Surely, Jobs belongs on the shortlist of American, if not the world's, cultural heroes. Our grandchildren will learn of Jobs in their American history classes. In general, people are suckers for great men and women. Early historians understood that we are fascinated by great individuals; these historians did not so much write biographies as produce hagiographies, distorting what could be known about their subjects and adding details to make them appear less prone to human failings than they actually were. Among the sacred texts, the Hebrew Bible is one of the few that resists burnishing the lives it recounts. This is a strength of the Hebrew Bible; its authors understood that it is through their faults that we recognize great heroes as fellow human beings.
A close friend of Steve Jobs, Dr. Dean Ornish, understood this too, saying, Steve "was very human… He was so much more of a real person than most people know. That's what made him so great." Jobs was imperfect like most of us schmoes. His sister, Mona Simpson, wrote a "fictional" novel, A Regular Guy, whose main character bears many similarities to her iconic brother. Reviewers of the book noted that it was not an unalloyed portrait. Even his worst enemy, however, cannot deny that Jobs was blessed with unusual leadership and vision.
He belongs, then, on that list of individuals that the 19th Century Scottish writer, Thomas Carlyle, used to illustrate his "great man" theory. This theory views Western history as the playground of men and women who, thanks to their genius-level scientific or artistic talents, or beyond-brilliant military and leadership instincts, or ground-breaking philosophical or spiritual gifts have impacted millions, even billions of lives over the course of their own generations and beyond. Carlyle speculated that history could be explained by the actions of these "greats." He wrote, "The soul of the whole world's history, it may justly be considered, were the history of these." Their extra-ordinary attributes, like "the light which enlightens" is not "a kindled lamp only" but rather "a natural luminary shining by the gift of Heaven."
The author (Steven Levy) of the 1994 book, Insanely Great, chronicling the birth of the Mac, described the light cast by Jobs: "He was the most passionate leader one could hope for, a motivating force without parallel." A co-founder of Pixar (Edwin Catmull) commented that over the course of the four years during which his company struggled to make "Toy Story," Jobs never flagged in his determination: "You need a lot more than vision — you need a stubbornness, tenacity, belief and patience to stay the course…In Steve's case, he pushes right to the edge, to try to make the next big step forward."
These traits—Jobs' vision, stubbornness, tenacity, belief, and patience to stay the course, pushing right to the edge, driven to make the next big step—were surely shared by other "great men and women," like Winston Churchill or Muhammad or Isaac Newton or Martha Graham, all of whom excelled in the face of outrageous odds and legions of naysayers.
Carlyle also held that the thoughts of "great men and women" were "the parents of the actions they did; their feelings were parents of their thoughts: it was the unseen and spiritual in them that determined the outward and actual." Religion was not, for Carlyle, defined by creeds or by the houses of worship to which they belonged. Religion meant, rather, that which these great men or women believed, that they kept close to their hearts, that was "in all cases the primary thing" determining their practical actions. If one adopts Carlyle's definition, then the "chief fact" about Jobs, his "primary thing," his religion, was this: "Stay hungry, stay foolish," and "don't let the noise of others' opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition."
A contemporary of Carlyle, the German philosopher, Hegel, embraced a similar view of the role of superlative individuals in history. But for him, great people served as vehicles for the progressive unfolding of God-Spirit, or Geist in the world. Heroes, he wrote, are not agents who act independently of the Whole; rather, they serve as agents for Geist in moving history forward. This movement, according to Hegel, is inevitable.
Indeed, there will be those who—out of a personal dislike for Jobs, or because they are strongly attached to the notion of equality and thus resist recognizing that some human beings make greater contributions than others—will opine in Hegelian mode that if Jobs hadn't brought forth an abundance of culture-changing gadgets, someone else would have. Or they will turn to the common 20th Century position that we are all products of our social space and that the contributions of all "great men and women" would have been impossible without the prior existence of this space.
But the fact that it could have been some other individual produced by our current social space actually underscores the truth that, regardless of possible competitors, Jobs was the one, the singular channel.
Goodnight sweet prince of tech. We'll miss you lots. We miss you already.
Resource: Carlyle, Thomas. On Heroes, Hero-Worship and the Heroic in History. London, GBR: ElecBook, 2001.
Case Study #2

Mark, 29 years old, and Ellen, 27 years old, have been married for 2 years. When they first met, he treated her like a queen. He was kind and loving and "idolized" her. He would show glimpses of jealousy and controlling behavior, but she did not pay much attention to it. After they got married, his jealousy and controlling behavior grew worse. He treated her wonderfully when she was with him at home and engaging in behavior that made him feel loved and needed. However, he was controlling whenever she left the house to run errands or see her friends. When she returned home, he would question her in passive-aggressive ways and sometimes be outright accusatory. She would naturally become defensive and withdraw emotionally. That seemed to make things worse, and he would start to become verbally abusive about her not meeting his needs, not loving him, and not wanting to be with him. These verbal explosions were often accompanied by demeaning language. After these outbursts, Ellen felt like leaving him and even expressed this at times. Most of the time Mark would apologize profusely, then bring her flowers or gifts and rationalize his behavior in some way or promise he would change. Things would then get better for a while, and he would engage in "idolizing" behavior. However, the cycle of controlling behavior would just begin again.

Prompt: Identify the case study you selected. Explain what features of borderline personality disorder the primary character exhibits. Explain how the concept of splitting is demonstrated, and describe the role that empathy plays in the splitting. Explain challenges a forensic psychology professional might have when working with individuals with borderline personality disorder.

Reading: Ackley, C., Mack, S., Beyer, K., & Erdberg, P. (2010). Investigative and forensic interviewing: A personality-focused approach. Boca Raton, FL: CRC Press. Chapter 3, "The Antisocial Personality" (pp. 43–60); Chapter 4, "The Psychopathic Personality" (pp. 61–93).

Rubric

Main Discussion Posting Content
- Excellent – above expectations (21.6–24 points, 54%–60%): Discussion posting demonstrates an excellent understanding of all of the concepts and key points presented in the text/s and Learning Resources. Posting provides significant detail, including multiple relevant examples, evidence from the readings and other scholarly sources, and discerning ideas.
- Good (19.2–21.57 points, 48%–53.92%): Discussion posting demonstrates a good understanding of most of the concepts and key points presented in the text/s and Learning Resources. Posting provides moderate detail (including at least one pertinent example), evidence from the readings and other scholarly sources, and discerning ideas.
- Fair (16.8–19.17 points, 42%–47.93%): Discussion posting demonstrates a fair understanding of the concepts and key points as presented in the text/s and Learning Resources. Posting may be lacking or incorrect in some area, or in detail and specificity, and/or may not include sufficient pertinent examples or provide sufficient evidence from the readings.
- Poor (0–16.77 points, 0%–41.93%): Discussion posting demonstrates poor or no understanding of the concepts and key points of the text/s and Learning Resources. Posting is incorrect and/or shallow and/or does not include any pertinent examples or provide sufficient evidence from the readings.

Reply Post & Peer Interaction
- Excellent (7.2–8 points, 18%–20%): Student interacts frequently with peers. The feedback postings and responses to questions are excellent and fully contribute to the quality of interaction by offering constructive critique, suggestions, in-depth questions, use of scholarly, empirical resources, and stimulating thoughts and/or probes.
- Good (6.4–7.16 points, 16%–17.9%): Student interacts moderately with peers. The feedback postings and responses to questions are good, but may not fully contribute to the quality of interaction by offering constructive critique, suggestions, in-depth questions, use of scholarly, empirical resources, and stimulating thoughts and/or probes.
- Fair (5.6–6.36 points, 14%–15.9%): Student interacts minimally with peers, or the feedback postings and responses to questions only partially contribute to the quality of interaction by offering insufficient constructive critique or suggestions, shallow questions, or poor-quality additional resources.
- Poor (0–5.56 points, 0%–13.9%): Student does not interact with peers (0 points), or the feedback postings and responses to questions do not contribute to the quality of interaction by offering any constructive critique, suggestions, questions, or additional resources.

Writing
- Excellent (7.2–8 points, 18%–20%): Postings are well organized, use a scholarly tone, contain original writing and proper paraphrasing, follow APA style, contain very few or no writing and/or spelling errors, and are fully consistent with graduate-level writing.
Chemicals have become an integral part of daily life. Used properly, they make life easier; used improperly, they can cause injury. A hazardous material is any substance that threatens people, property, or the environment. Hazardous materials can cause harm at any stage: during production, transportation, storage, or use. The consequences of improper handling range from serious injuries, illness, or death to damage to buildings and other property. Using and storing hazardous materials at home has become an ordinary thing.
A huge quantity of hazardous materials is used and stored in most of the country's facilities, from factories to farms. Hazardous materials may be radioactive, flammable, explosive, or toxic, or may have other hazardous characteristics. Storing, using, or producing hazardous materials without knowing and observing safety measures is inadmissible.
A first aid kit is a must in any home, but when dealing with hazardous materials an ordinary first aid kit will not be enough. To protect against the threat of hazardous materials, a first aid kit designed for hazardous-materials protection must be acquired. This kit contains the supplies necessary to reduce injuries caused by hazardous materials.
How can I protect myself from a hazardous materials incident? | 2.09509 | HuggingFaceFW/fineweb-edu |
Written and directed by David Engelbach, America 3000 takes place in Colorado, USA in the year 3000. It turns out a nuclear war occurred in 1992, and that has reduced humanity back to a stone-age standard of living under the rule of a fierce female tribe. All men are either living in the wild or slaves! When the leader of the dominant female clan dies, there is a dispute over her vacated position; two sisters vie for the honour and are challenged by the leader of another clan.
As the story develops we meet two young men called Korvis (Chuck Wagner) and Gruss (William Wallace) who escape their city boundaries and explore the area. During their exploration they find a bunker that once belonged to the President of the United States of America.
In the bunker they find a wide range of guns and ammunition and decide to use these to start a revolution!
Poly(vinylidene fluoride) gels were formed in γ-butyrolactone; the critical polymer concentration for gel formation was 4.5 g per 100 cm³. Gelation was caused by liquid–liquid phase separation, even though solid–liquid phase separation by crystallization occurred in the late stage of gelation, as shown by the formation of a transparent gel, a relatively small enthalpy of gel formation, and the existence of two types of crystal structures. In the dried gel films many spherulites connected by tie molecules were observed by scanning electron microscopy; the existence of such tie molecules is likely what makes the formation of the gel films possible.
Article citation:
Thermoreversible gelation of poly(vinylidene fluoride) in γ-butyrolactone solution / Jae Whan Cho, Ha Yool Song and Sang Yong Kim // Polymer. – 1993. – Vol. 34. – P. _PHONE_.
Palaeolithic compared to a maximum of 30 tools in the Mousterian.
Chronology of Palaeolithic and following periods
The Palaeolithic is sometimes divided into three (somewhat overlapping) periods which mark technological and cultural advances in different human communities:
Overview of the main features of these periods
| Period | Tools | Economy | Dwellings | Society | Religion |
|---|---|---|---|---|---|
| Palaeolithic (Stone Age) | Sharpened flint or stone tools: hand axes, scrapers, wooden spears | Hunting and gathering | Mobile lifestyle – caves, huts, tooth or skin hovels, mostly by rivers and lakes | Tribes of plant gatherers and hunters (25–100 people) | Evidence for belief in the afterlife in the Upper Palaeolithic: appearance of burial rituals and ancestor worship. Priests and sanctuary servants appear in prehistory. |
| Mesolithic (known as the Epipalaeolithic in areas with no trend towards agricultural lifestyles) | Fine small tools: bow and arrow, harpoons, fish-basket, boats | | | Tribes and bands | |
| Neolithic | Chisel, hoe, plough, reaping-hook, grain pourer, barley, loom, pottery and weapons | Agriculture, hunting and gathering, fishing and domestication | Farmsteads during the Neolithic and the Bronze Age; formation of cities during the Bronze Age | Tribes, with chiefdoms in some societies at the end of the Neolithic; states and civilisations during the Bronze Age | |
| Bronze Age | Writing; copper and bronze tools, potter's wheel | Agriculture; cattle-breeding; crafts, trade | | | |
| Iron Age | Iron tools | | | | |
Venus figurines
Possibly among the earliest traces of art are Venus figurines. These are figurines (very small statues) of women, mostly pregnant with visible breasts. The figurines were found in areas from Western Europe to Siberia. Most are between 20,000 and 30,000 years old. Two figurines have been found that are much older: the Venus of Tan-Tan, dated to 300,000 to 500,000 years ago, was found in Morocco. The Venus of Berekhat Ram was found on the Golan Heights. It has been dated to 200,000 to 300,000 years ago. It may be one of the earliest objects that show the human form.
Today it is not known what the figurines meant to the people who made them. There are two basic theories:
- They may be representations of human fertility, or they may have been made to help it.
- They may represent (fertility) goddesses.
Scientists have excluded that these figurines were linked to the fertility of fields, because agriculture had not been discovered at the time the figurines were made.
The two older figurines may have been formed mostly by natural processes. The Venus of Tan-Tan was covered with a substance that could have been some kind of paint. The substance contained traces of iron and manganese. The figurine of Berekhat Ram shows traces that someone worked on it with a tool. A study done in 1997 states that these traces could not have been left by nature alone.
Cave paintings
Cave paintings are paintings that were made on the walls or roofs of caves. Many cave paintings belong to the Palaeolithic Age, and date from about 15,000 to 30,000 years ago. Among the most famous are those in the caves of Altamira in Spain and Lascaux in France.p545 There are about 350 caves in Europe where cave paintings have been found. Usually, animals were painted, like aurochs, bison or horses. Why these paintings were done is not known. They are not simply decorations of places where people lived. The caves they were found in usually do not show signs that someone lived in them.
One of the oldest caves is that of Chauvet in France. Paintings in the cave fall into two groups. One has been dated to around 30,000 to 33,000 years ago, the other to 26,000 or 27,000 years ago.p546 The dates are based on radiocarbon dating of "black from drawings, from torch marks and from the floors". As of 1999, the dates of 31 samples from the cave have been reported. The oldest paintings have been dated to 32,900±490 years ago.
Some archaeologists have questioned the dating. Züchner believes the two groups date from 23,000–24,000 and 10,000–18,000 years ago. Pettitt and Bahn believe the dating is inconsistent. They say that people in those periods painted things differently. They also do not know where the charcoal used to paint some things came from, or how big the painted area is.
People from the Palaeolithic era drew well. They knew about perspective, and they knew of different ways to draw things. They also were able to observe the behaviour of animals they painted. Some of the paintings show how the painted animals behaved. The paintings may have been important for rituals.
Footnotes
- Ancient Greek: palaios = old; and lithos = stone. Coined by John Lubbock in 1865.
- Nicholas Toth and Kathy Schick (2007). Handbook of Paleoanthropology. Springer. pp. 1963. ISBN 978-3-540-32474-4 (Print) 978-3-540-33761-4 (Online). _URL_
- Klein, Richard G. 2009. The human career: human biological and cultural origins. 3rd ed, Chicago.
- Hosfield R.T., Wenban-Smith F.F. & Pope M.I. 2009. Great prehistorians: 150 years of Palaeolithic research, 1859–2009. Lithics 30.
- Grolier Incorporated (1989). The Encyclopedia Americana. University of Michigan: Grolier Incorporated. p. 542. ISBN 0-7172-0120-1.
- Mesolithic Period. 2008. In Encyclopædia Britannica. Retrieved April 10, 2008, from Encyclopædia Britannica Online.
- "Stone Age," Microsoft® Encarta® Online Encyclopedia 2007 © _PHONE_ Microsoft Corporation. Contributed by Kathy Schick and Nicholas Toth
- Grolier Incorporated (1989). The Encyclopedia Americana. University of Michigan: Grolier Incorporated. p. 542. ISBN 0-7172-0120-1. _URL_
- McClellan (2006). Science and Technology in World History: An Introduction. Baltimore, Maryland: JHU Press. ISBN 0-8018-8360-1. _URL_ Page 6-12
- Napier, John. 1960. Fossil hand bones from Olduvai Gorge. Nature, December 17th.
- Vrba E. & Y.H.-Selassie 1994. African Homo erectus: old radiometric ages and young Oldowan assemblages in the middle Awash Valley, Ethiopia. Science 264: _PHONE_.
- known as the Hoxnian, the Mindel-Riss or the Holstein stage
- Smithsonian: Middle Stone Age tools.
- Miller, Barbra; Bernard Wood, Andrew Balansky, Julio Mercader, Melissa Panger (2006). Anthropology. Boston Massachusetts: Allyn and Bacon. pp. 768. ISBN _PHONE_. _URL_
- "'Oldest sculpture' found in Morocco". BBC News online. 23 May 2003. _URL_
- Alexander Marshack (1997). "The Berekhat Ram figurine: a late Acheulian carving from the Middle East" (pdf). _URL_
- Quotes from Clottes 2003b p214.
- Archaeologists sometimes use the phrase "B.P." (before the present day) to mean "years ago"
- Clottes 2003b p33. The oldest is sample Gifa 99776 from "zone 10". See also Chauvet (1996 p131, for a chronology of dates from various caves. Bahn's foreword and Clottes' epilogue to Chauvet 1996 discuss dating.
- Züchner, Christian (September 1998). "Grotte Chauvet Archaeologically Dated". Communication at the International Rock Art Congress IRAC '98. _URL_ Retrieved 2007-12-23.
Clottes (2003b), pp. 213-214, has a response by Clottes.
- Pettitt, Paul
- * Blood supply is a branch of the external carotid artery.
28. Trapezius muscle
- * Origin: the external occipital protuberance on the occipital bone and the bony ridges, the superior nuchal lines, which run laterally from it; also the spinous processes of the cervical and thoracic vertebrae.
- * Insertion: into the spine of the scapula (shoulder blade), the acromion process of the scapula, and the lateral 1/3 of the clavicle (collar bone).
- * Its function is to adduct (move toward the midline) and elevate the scapula (shoulder blade), as well as slightly rotate it.
- * EXAMPLE: Shrugging of the Shoulder.
- * Typing at an improper height can cause pains in the trapezius from holding the arms in a raised position while doing work.
29. Migraine headaches
Involuntary contraction of the sternomastoid and trapezius muscles; some malocclusions can also cause such spasms.
30. Temporomandibular joint(TMJ) Pain
The SCM and trapezius get nerve supply from the 2nd, 3rd and 4th cervical nerves, which are in close approximation to the lower part of the trigeminal nerve nucleus in the upper spinal cord. This can at times cause pain to emanate from the area of the TMJ. To find the problem, anesthetize the sternomastoid and trapezius to see if the problem disappears.
unsuspecting swimmers and swallow them whole. This is very improbable, seeing as the adductor muscle moves way too slowly for the clam to catch a swimmer passing by and the clam would rather retreat into its shell than try sampling humans.
What is a starfish? Is it really a fish?
This one's a real starfish
These stars are Sea Stars!
Many scientists are trying to replace the name of our dearest starfish with "sea star", and the reason for this is that the starfish is not a fish but in fact an echinoderm. So we decided to help them out. Moving on, the sea star can be found in oceans (there are no freshwater sea stars) the world over, with around 2,000 species known to mankind. The most common of them are the five-armed variety, and incidentally that's where they get their name. However, there are sea stars with as many as 10, 20 or 40 arms. These invertebrates have no brains or blood and make do with filtered sea water for the latter; the former is not so easily substituted. They have bony, calcified skin that helps in keeping predators at bay. Their outstanding colours work as either camouflage or towards scaring off predators. The famed feature of the sea star is its ability to re-grow limbs and sometimes even entire bodies. This is possible because the sea star houses most of its vital organs in its arms. Though some need the central body to be intact in order to carry out the regeneration, others can grow a brand-new sea star just from a piece of severed limb. Another remarkable feature of the sea star is its ability to feed outside its body. It prises open clams or oysters using tiny suction-cupped tube feet, and then, with the help of the sac-like cardiac stomach that emerges from its mouth and oozes into the shell, it encases the prey, digesting it before withdrawing the stomach back into the body.
What is a common octopus? And what are its unique survival techniques?
The master of escaping acts – Houdini!...Or could it be....?
The Common Octopus! Well, for starters, handcuffing all eight arms is going to be quite tedious. This fascinating creature is truly the master of disappearing acts among the underwater invertebrates and uses a large collection of techniques to evade or thwart attackers. The most incredible form of defence applied by the common octopus is its ability to hide itself in plain sight. It does this by matching the colours, patterns, and even textures of its surroundings, and this modification is instantaneous. How is that possible? Well, it does this with a network of pigment cells and specialized muscles in its skin. If its predators only knew the food they were hunting was floating right in front of them, they wouldn't just swim by. The frequently tricked hunters include dolphins, sharks and eels. However, if any of these hunters do look past the trickery, they are doused in a cloud of black ink that obscures their view, giving the octopus time to swim away. The black ink also contains a substance that dulls the predator's sense of smell, making the fleeing octopus harder to track. We bet Houdini wouldn't be able to perform the next two acts the common octopus is so accomplished at – the ability to squeeze into ridiculously small cracks and crevices where predators can't follow, and the ability to lose an arm to escape a predator's grasp and re-grow it later with no permanent damage. It preys on crabs, crayfish, and mollusks, and will sometimes use its ink to disorient victims before attacking. It also delivers a nasty bite with its beak-like jaws, and its venomous saliva helps subdue prey. Common octopuses are found in tropical and temperate oceans of the world and can grow to about 4.3 feet (1.3 meters) in length and weigh up to 22 pounds (10 kilograms). They are considered the most intelligent of all invertebrates.
(1) Conversion to a mnemonic file
A FAPT LADDER program can be converted to a FAPT LADDER-II program by first converting it to a mnemonic file, using the FAPT LADDER mnemonic editing function, then using that file with FAPT LADDER-II.

(2) Conversion to a memory-card format file
A FAPT LADDER program can also be converted to a FAPT LADDER-II program by first converting it to a memory-card format file, using [MEMORY CARD] of the FAPT LADDER program input/output function, then using that file with FAPT LADDER-II.

(3) Conversion to a FORMAT-C source file
A FAPT LADDER program can also be converted to a FAPT LADDER-II program by first converting it to a FORMAT-C source file. Some FAPT LADDER models support FORMAT-C source files. Convert the source file type to FORMAT-C, then use that file with FAPT LADDER-II.
(4) Conversion of a ROM format file
FAPT LADDER-II can convert a ROM format file to a memory-card format file, as follows:

(a) Operation
(1) Select [OFF-LINE FUNCTION] from the initial menu, then select F6 [I/O]. The [I/O] screen appears.
(2) Press the <F9> key on the [I/O] screen. The [I/O (ROM FILE)] screen appears. (Note: no item corresponding to the <F9> key is displayed on the [I/O] screen; the <F9> key is, however, effective.)
(3) Selecting F2 [READ] displays the [I/O (ROM FILE -> MCARD)] screen. Enter the ROM file and memory card file names, then convert the file.

APPENDIX 3  CONVERSION OF SEQUENCE PROGRAM

Fig. 6.3.2 (a): F1 key = WRITE (Memory Card -> ROM format file); F2 key = READ (Memory Card <- ROM format file)
3.3 Convert the PMC Type of a Sequence Program
By changing the mnemonic file, it is possible to convert a sequence program of one PMC type to another type.

3.3.1 Converting by editing the system parameter
By changing the system parameter data of the mnemonic format, it is possible to edit the data as a different PMC type. However, the usable functional instructions and the range of addresses differ between PMC types:

CNC type                PMC type
Power Mate-MODEL B      PMC-PA1/PA3
FS15-MODEL B            PMC-NB/NB2

[Example: PMC-RB -> PMC-RC3]
(1) Set the PMC type to PMC-RB and convert the original source program to a mnemonic file.
(2) Change the system parameter of the mnemonic file to PMC-RC3 with a standard text editor.
(3) Set the PMC type to PMC-RC3 on FAPT LADDER and convert the mnemonic file of (2) to a source program.

Original file (PMC-RB):

%@A
%@O
2 BCD
3 NO
4 PMC-RB
7 100
9 YES
%
%@1
01 ABC-KIKAI
02 S-DRILL
%
%@5
X000 1
Y008 1
%
%@E
0 0 1 ID16C
4 OD32A
%

Converted file (PMC-RC3), with the system parameter changed:

%@A
%@O
2 BCD
3 NO
4 PMC-RC3
5 000000
6 50
7 100
%
%@1
01 ABC-KIKAI
02 S-DRILL
%
%@5
X000 1
Y008 1
%
%@E
0 0 1 ID16C
4 OD32A
%
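Step (2) of the example amounts to rewriting the `4 <PMC type>` record of the system-parameter block in a text editor. As a rough, hypothetical sketch (not part of the FANUC manual; a real tool would also have to add or adjust model-specific records such as 5 and 6, and confine the match to the system-parameter block), the substitution could be automated like this:

```python
def set_pmc_type(lines, new_type):
    """Rewrite system-parameter record 4 (the PMC type) in a FAPT LADDER
    mnemonic file given as a list of text lines.

    Hypothetical helper: a real converter would restrict the match to the
    system-parameter block and also update model-specific records."""
    out = []
    for line in lines:
        if line.startswith("4 "):  # record 4 holds the PMC type
            out.append("4 " + new_type)
        else:
            out.append(line)
    return out

mnemonic = ["%@A", "%@O", "2 BCD", "3 NO", "4 PMC-RB", "7 100", "9 YES", "%"]
converted = set_pmc_type(mnemonic, "PMC-RC3")
```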
3.3.2 Converting with the signal address converter

Converter file    Applicable PMC/CNC type
FS0TCNV.SYM       PMC-L/M(MMC) (FS0-T) -> PMC-RA1/RA2/RA3/RB/RB2/RB3/RB4/RB5/RB6/RC/RC3/RC4 (FS16/18/20-T)
FS0MCNV.SYM       PMC-L/M(MMC) (FS0-M) -> PMC-RA1/RA2/RA3/RB/RB2/RB3/RB4/RB5/RB6/RC/RC3/RC4 (FS16/18/20-M)
PM-C_CNV.SYM      PMC-P (Power Mate-MODEL C) -> PMC-PA1/PA3 (Power Mate-MODEL D/F/H)

Reference materials: FANUC PMC PROGRAMMING MANUAL (LADDER LANGUAGE) B-61863E

(Note) The converter files are stored in the directory APPENDIX of the module system floppy disk (Vol. 5).
[Example: PMC-P -> PMC-PA1]
(1) Set the PMC type to PMC-P, and convert the original source program to a mnemonic file. (-> A)
(2) Set the PMC type to PMC-PA1. Input the source program name and select [END] in edit mode, without editing the ladder program.
(3) Convert the source program of (2) to a mnemonic file. (-> B)
(4) Quit FAPT LADDER and start any standard text editor (select the mnemonic file name of (3) to edit).
(5) Replace the symbol and comment data of the mnemonic file (PMC-PA1) according to the converter file (PM-C_CNV). (-> C)
(6) Replace the ladder data of the mnemonic file (PMC-PA1) with the ladder data of the original mnemonic file (PMC-P). (-> D)
(7) Exit the text editor, start FAPT LADDER, and convert the edited mnemonic file to a source program.
(8) Set the PMC type to PMC-PA1.
(9) Select edit mode, and delete the symbol and comment data.

[Listings A–D, heavily garbled in the source, show: the converter file PM-C_CNV.SYM mapping addresses such as G0004.3 -> G68.3 and G0005.0 -> G95.0; A, the original PMC-P mnemonic file; B, the converted PMC-PA1 file (2 BINARY, 1 2048, 3 NO, 4 PMC-PA1); C, the replaced symbol/comment block (X1027.4–X1027.7 -> X23.4–X23.7); and D, the inserted ladder data (RD X21.4 / WRT G121.4 / RD.NOT X22.3 / WRT.NOT G122.3 / SUB 1 / SUB 2).]
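Step (5)'s symbol-and-comment replacement is a mechanical address substitution driven by the converter table. A hypothetical sketch follows; the mapping entries and their direction are illustrative only, loosely based on the listing residue above, and the real PM-C_CNV.SYM file defines the authoritative pairs:

```python
# Illustrative subset of a signal address converter table
# (old address -> new address); NOT the real PM-C_CNV.SYM contents.
MAPPING = {
    "G0004.3": "G68.3",
    "G0005.0": "G95.0",
}

def convert_addresses(lines, mapping):
    """Apply an address-converter table to records whose first
    whitespace-separated field is a signal address; other records
    pass through unchanged."""
    out = []
    for line in lines:
        fields = line.split()
        if fields and fields[0] in mapping:
            fields[0] = mapping[fields[0]]
        out.append(" ".join(fields))
    return out

records = ["G0004.3 SYMB1 comment text", "X21.4 SYMB2"]
converted_records = convert_addresses(records, MAPPING)
```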
3.3.3 Using data in a sequence program for another program
Data in a sequence program (such as title, symbol & comment, ladder, message, and I/O module data) can be used for another program by the following method. The range of addresses used varies from one model to another, so the addresses may have to be modified. Refer to the programming manuals of the respective models.

[Example: Using the symbol & comment data of the PMC-RB for the PMC-RC3]

The symbol & comment block of the PMC-RB mnemonic file (header: 2 BCD, 3 NO, 4 PMC-RB, 7 100, 9 YES),

%@2
X000.0 ZPX.M
X000.1 ZPY.M
%

is inserted into the PMC-RC3 mnemonic file (header: 2 BCD, 3 NO, 4 PMC-RC3, 5 000000, 6 50, 7 100, 9 YES) ahead of the %@E record.
Converting a step sequence program according to the model
Usually, mnemonics are used to convert a ladder program according to the model. A ROM file is used only when a step sequence program for the PMC-RB4 (STEP SEQ) is converted for the PMC-RC4 (STEP SEQ). The operation by which the source program C:¥DATA¥SAMPLERB is converted to the source program C:¥DATA¥SAMPLERC is as follows.

[Procedure]
(1) Compile the step sequence program C:¥DATA¥SAMPLERB for the PMC-RB4 (STEP SEQ) and create an object file (this creates C:¥DATA¥SAMPLERB.MEM).
(2) Create the new source program C:¥DATA¥SAMPLERC. The object file created in (1) is renamed from the command line as follows:

C:¥> RENAME C:¥DATA¥SAMPLERB C:¥DATA¥SAMPLERC
Coarse-mode NO3− was present mainly as Ca(NO3)2, which formed through the reaction between CaCO3 and HNO3 on the Ca-rich dust particles26. Almost no NH4+ (cNH4+) was found in PM2.5–10. Preferential formation of (NH4)2SO4 in the fine mode is consistent with the good correlation (r2 = 0.94) between SO42− and NH4+. Because the Asian dust included a substantial amount of calcium23, 27, reaction between CaCO3 and (NH4)2SO4 on the surface of dust particles may result in the loss of NH4+ and the formation of insoluble CaSO4 28.
Figure 2
Footprint of the dust plume simulated by the HYSPLIT dispersion model ((a); a detailed description of the simulation is given in Method) during the dust-influencing period (on the left). The tracer particles were released from 500–1000 m above ground level at the observation site and dispersed for 5 days. The mass concentration of the tracer (in mass/m3) was calculated for the layer from 0 to 1000 m every 3 hours. The figures on the right show the corresponding mass concentrations of water-soluble inorganic matter (SO42−: blue; NO3−: red; NH4+: green; and Ca2+: yellow) in total PM2.5 (inner pie chart) and PM2.5–10 (outer gray ring). The maps were drawn with the software Igor Pro, _URL_
The hourly averaged depolarization ratio of dust particles decreased as the equivalent ratio of cNO3−/cCa2+ increased, with a correlation coefficient r2 = 0.76 (95% confidence interval, Fig. 3a). This indicates that the more cNO3− mass present in the coarse mode, the more likely a morphological change of the dust particles, owing to the heterogeneous reaction between CaCO3 and HNO3 on the dust particles in the polluted urban area26. The formation of a Ca(NO3)2 coating on the surface of dust particles reduced the critical supersaturation of the dust particles, giving them strong potential to serve as CCN29. The impact of Na+ (1.4%), Cl− (0.8%) and K+ (1.3%) in PM2.5–10 on the decrease of the depolarization ratio of dust particles was limited.
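As a minimal sketch of the equivalent-ratio arithmetic used above (mass concentration divided by equivalent weight, i.e. molar mass over charge; the molar masses below are standard constants assumed here, not values taken from the paper):

```python
# Equivalent weight = molar mass / |charge| (grams per equivalent)
EQ_WT = {"NO3-": 62.00 / 1, "Ca2+": 40.08 / 2}

def equivalent_ratio(mass_no3, mass_ca):
    """Equivalent ratio cNO3-/cCa2+ from mass concentrations (ug/m3)."""
    return (mass_no3 / EQ_WT["NO3-"]) / (mass_ca / EQ_WT["Ca2+"])
```

For example, equal equivalents of the two ions (62.0 ug/m3 of NO3− and 20.04 ug/m3 of Ca2+) give a ratio of 1.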
Figure 3
Relationship between the depolarization ratio of dust particles (Dp = 5 μm) and the equivalent ratio of NO3−/Ca2+ (a), and the mass fraction of aqueous matter in the coarse mode (b). The colored circles in the plot represent the data during the dust-impact period from March 28 to April 1, 2015. The standard deviation (error bar) of the depolarization ratio of dust particles was calculated for the dataset corresponding to the filter sampling period. The colored triangles indicate another weak floating-dust case from April 9 to April 11, 2015. Backward trajectory analysis using HYSPLIT indicated that this air mass also came from the northwest. Although it was not the same dust plume, the impact of anthropogenic pollutants on the depolarization ratio of the dust particles was similar.
Morphological changes in dust particles were a synergistic effect of both pollutants and water content on the dust surface. The increase of relative humidity (RH) plays a vital role in the decrease of the depolarization ratio of the dust particles in the presence of a Ca(NO3)2 coating. The correlation coefficient between the depolarization ratio of the dust particles and the mass fraction of the aqueous content (water + Ca2+ + NO3−) in PM2.5–10 was r2 = 0.66 (95% confidence interval, Fig. 3b). Because all Ca(NO3)2 in the coarse mode was deliquescent when RH > 20% and underwent an exponential increase in water uptake30, the volume fraction of aqueous mass in the particles was calculated assuming a dust-particle density of 2.5 g/cm3 31. From March 29 to March 30, the ambient RH was 48 ± 9%, and the aqueous mass was estimated to be 28.5 ± 10.8 μg/m3, accounting for 17% of the total volume of PM2.5–10. However, at the end of the dust episode on March 31, the aqueous mass was 55.5 ± 13.7 μg/m3 on average, accounting for as much as 70% of the total volume of PM2.5–10 at RH = 86%. This large fraction of aqueous matter well explains the obvious decrease in the depolarization ratio of the dust particles.
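The volume-fraction bookkeeping above can be sketched as follows. The dust density of 2.5 g/cm3 is from the text; the aqueous density of 1.0 g/cm3 and the dry coarse-mode mass of 59.5 μg/m3 are illustrative assumptions chosen only to reproduce the ~70% figure, not values reported in the paper:

```python
def aqueous_volume_fraction(m_aq, m_dry, rho_aq=1.0, rho_dust=2.5):
    """Volume fraction of aqueous matter in the coarse mode.

    m_aq, m_dry: mass concentrations in ug/m3.
    rho_aq, rho_dust: densities in g/cm3 (dust density 2.5 g/cm3 as
    stated in the text; the aqueous density is an assumption)."""
    v_aq = m_aq / rho_aq       # volume per unit air volume (arbitrary units)
    v_dry = m_dry / rho_dust
    return v_aq / (v_aq + v_dry)

frac = aqueous_volume_fraction(55.5, 59.5)  # ~0.70
```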
Although the observed depolarization ratio of the coated dust particles decreased evidently, we considered such dust particles to be "quasi-spherical" because the observed minimum depolarization ratio (0.34) was still higher than that (0.08) of standard spherical particles (see Method). The polarization property of randomly oriented elongated ellipsoid particles was simulated on the basis of the T-matrix methodology32, 33. A reception angle of 120 degrees relative to the incident light direction, the same as in the POPC instrumentation, was used. The theoretical calculation revealed that the depolarization ratio of dust particles is mainly determined by their aspect ratio (defined as the ratio of the longest dimension to its orthogonal width). Variations in the particle's refractive index (the real and imaginary parts) can explain only limited depolarization variability (5%). As indicated in Fig. 4a, the observed maximum depolarization ratio (0.5) in this study corresponded to an aspect ratio of 1.7 for uncoated dust particles. During the polluted dust period on March 31, the aspect ratio of the dust particles was estimated to be 1.6 as the depolarization ratio of the dust particles decreased to 0.34, as shown by the yellow shading in Fig. 4a. Provided that the dust particles were in a standard ellipsoid configuration and underwent hygroscopic growth only along the shortest projection, a 70% increase in the volume of coating matter on the dust surface would cause the aspect ratio to decrease, at most, from 1.7 to 1.3. We conclude that this moderate decrease in the aspect ratio of the dust particles demonstrates that the deliquescence and hygroscopic growth of Ca(NO3)2 occurred over the entire surface of the dust particles. The depolarization ratio of particles with Dp = 1 μm showed a linear decrease with increasing aspect ratio (Fig. 4b).
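The 1.7 to 1.3 bound follows from simple spheroid geometry rather than the full T-matrix model: for a prolate spheroid with semi-axes a > b = c, volume scales as a·b2, so if all added volume widens only the short axes, the aspect ratio falls by a factor of sqrt(1 + ΔV/V). A sketch under that idealization:

```python
import math

def new_aspect_ratio(ar0, volume_growth):
    """Aspect ratio of a prolate spheroid (semi-axes a > b = c) after
    hygroscopic growth, assuming all added volume widens the short axes.

    V ~ a * b**2 with a fixed, so b' = b * sqrt(1 + volume_growth)
    and AR' = ar0 / sqrt(1 + volume_growth)."""
    return ar0 / math.sqrt(1.0 + volume_growth)

ar_after = new_aspect_ratio(1.7, 0.70)  # ~1.30, matching the text
```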
Figure 4
Theoretical simulation of the depolarization ratio of randomly oriented elongated ellipsoid particles as a function of the aspect ratio at Dp = 5 μm (a) and Dp = 1 μm (b) on the basis of the T-matrix methodology.
Discussion
Sulfate was frequently observed in dust samples from the Asian continent by electron-microscopy inspection10, 13. In this study, a negative correlation (r² = 0.75, 95% confidence interval) was found between the depolarization ratio of dust particles and the mass fraction of cSO4²− in PM2.5–10. The good equilibrium in PM2.5–10 implied that most of the cSO4²− in the samples may have formed as CaSO4, which has a small solubility (0.3%) and a high deliquescence point (RH = 98%); the large amount of cSO4²− in PM2.5–10 was probably due to the significant dilution during the pretreatment of filter samples for ion chromatography analysis. In the real atmosphere, the hygroscopic effect of CaSO4 on dust morphology seemed to be negligible. One possible explanation is that the heterogeneous reaction itself may modify the non-sphericity of dust particles. As demonstrated in laboratory experiments, chemical processing of sulfuric acid (H2SO4) on dust directly led to evident dust surface modifications34; however, direct conversion from CaCO3 to CaSO4 occurred more slowly and was incomplete. Another contribution may arise from the direct heterogeneous reaction between CaCO3 and (NH4)2SO4 under RH > 60% conditions28. It is worth noting that the mass concentration of cNO3− was much larger than that of cSO4²− in the dust plume, with a cNO3−/cSO4²− mass ratio of 1.5–2.5, much higher than the value (0.16–0.5) reported ten years ago during a polluted dust period in Beijing27. This demonstrates that NOx has exceeded SO2 as the most important pollutant in the Beijing area, in accordance with bottom-up emission inventory estimations35. Consequently, more direct absorption of reactive HNO3 on the alkaline surface of dust particles becomes important, forming strongly hydrophilic compounds such as Ca(NO3)2 29, 36, which absorb water vapor at relative humidity (RH) above 10%30 and apparently affect the ability of the particles to serve as cloud condensation nuclei (CCN).
Another important point is that mineral dust and organic aerosols have traditionally been studied separately; however, a growing number of observations made with a variety of techniques have pointed out that adsorption of short-chain oxygenated hydrocarbons and low-vapor-pressure organic species (e.g., carboxylic acids) onto the
A Pulangiyen youth group called Kabatan-on Lumadnong Pulangiyen (KLP) is regenerating 12 hectares of previously cleared forest areas in Bendum, an upland village in Northern Mindanao, by gathering lauan seeds from mother trees in the puwalas (maturing forest) and transplanting local tree species. This assistance in the natural regeneration of forests is expected to maintain and enhance the remaining natural biodiversity of the regenerating forests.
These youth are forming a life pattern under new circumstances, and the first effort is to go back and reclaim past knowledge of utilizing forests, both within the seasonality of activity and over a longer cycle of shifting land use.
The forest area has declined and is retreating to the higher slopes. The youth now seek to draw the forest back down by regenerating 12 hectares, starting with the previously barren log deck and areas cleared for unsustainable agriculture.
The KLP wants to establish the youth's role in the community, and their identity and inheritance through the gaup (ancestral domain). They identify themselves by their rivers and are restoring their remaining forest to strengthen cultural integrity, while seeking greater equity in a globalizing culture through recognition of their traditions. The KLP also seeks dialogue to strengthen and unite the cultures of Upper Pulangi and maintain peace in the whole area.
Autism Dad: I Need to Know
1) How do you express your love to your autistic child? And is he/she able to understand and reciprocate with words or other gestures?
2) In general, how do you teach your autistic child to understand, cope with and express a range of emotions, from love and joy to frustration and anger?
1) My second born, Ryan, was speech delayed, which made us worry that he, too, was headed for an autism diagnosis. For whatever reason, he caught up quickly and now we can't shut him up! I would like to hear from parents who have more than one autistic child. How do you cope, and how have you had to adjust your expectations for your family?
2) If you have only one child on the spectrum, how do you ensure that your normal functioning child doesn't feel he/she is any less loved? How do you talk to this child about his autistic sibling? When is the appropriate age to do so?
1) Does anyone have tales of an autistic child who also shares Ben's grocery shopping compulsion?
2) What compulsions does your child have? What have you done to address them?
3) Ben's diet is generally limited to pizza and hot dogs and Mac n Cheese (but just the brand his mother uses). He will eat ice cream but not frozen yogurt. What are your experiences with broadening an autistic child's diet? What has worked for you?
1. If you are a single dad or mom, what challenges have you encountered communicating and coordinating with your ex?
2. How have you and your ex managed to put animosity or resentments aside and do what is right for your child?
3. Can you recall an individual who helped guide you in your journey and navigate the often convoluted autism services delivery system?
4. Do you believe in fate or the idea that things in this universe happen for a reason? Or is what I describe in this essay simply a series of coincidences?
When discussing antonyms for money, we delve into words that represent the opposite concept of currency and wealth. Antonyms are words that express a contradictory meaning to another word. In this case, we explore the vocabulary that stands in contrast to the notion of financial value and capital.
In the realm of language and semantics, antonyms play a crucial role in offering contrast and diversity in our expressions. By identifying words that are antonyms for money, we gain a deeper understanding of the various concepts and values that exist in our linguistic repertoire. This exploration can provide insight into different aspects of wealth, poverty, and resources through the lens of language.
By examining antonyms for money, we invite a reflection on the multifaceted nature of our economic discourse. This investigation allows us to expand our vocabulary and consider alternative perspectives on the significance of wealth and its absence. Through this linguistic journey, we can uncover nuances and complexities in how we communicate about financial matters and the broader spectrum of values that shape our interactions with money.
35 Antonyms for MONEY With Sentences
Here's a complete list of opposites of money. Practice them and let us know if you have any questions regarding MONEY antonyms.
| Sentence with Money | Sentence with Antonym |
| --- | --- |
| Money cannot buy happiness. | Poverty does not guarantee misery. |
| She decided to donate all her money to charity. | She exhibited great generosity by giving away her possessions. |
| He is known for his wise handling of money. | He is criticized for his reckless spending and lack of frugality. |
| The king had more money than he could possibly spend. | The kingdom was in a state of abundance, with richness overflowing everywhere. |
| His money made him powerful in the business world. | His lack of wealth did not hinder his drive and determination in the industry. |
| The country experienced an era of great money growth. | The town fell into a state of economic decline and lacked prosperity. |
| She borrowed a large sum of money from the bank. | She repaid her debts diligently to live a life free of indebtedness. |
| The homeless man had no money and nowhere to go. | Despite their lack of destitution, they were content and full of joy. |
| The family lived a simple life with very little money to spare. | Their extravagant lifestyle showed no signs of austerity or financial discipline. |
| The company filed for bankruptcy as it ran out of money. | Their sound financial management ensured the company's solvency and avoided insolvency. |
| Due to the limited amount of money, they had to prioritize their spending. | The bountiful resources allowed them to live without worry about scarcity. |
| He lost all his money in the stock market crash. | Despite being penniless, he held onto hope and future prospects. |
| Growing up in poverty, she understood the value of money early on. | Having never experienced poverty, she struggled to comprehend the worth of money. |
| She inherited a vast sum of money from her late grandfather. | Having no inheritance to rely on, she built her fortune from scratch. |
| The neighborhood exuded an air of money and affluence. | Surprisingly, the modest community lacked any signs of affluence or wealth. |
| The extravagant mansion was a display of opulence and money. | The modest cottage showed no signs of opulence and lived a life free of luxurious excesses. |
| The money for essentials was running low in the household. | Surprisingly, there was no dearth of goods or products in the market. |
| With no money left, he spiraled into a state of depression. | Even without any misery, he still had a positive outlook on life. |
| He hoarded money and refused to spend a penny on himself. | She was known for her generosity and selfless sharing, unlike the infamous Scrooge. |
| They spent money extravagantly on parties and luxury goods. | They were known for their simple living and shied away from all things lavish. |
| Practicing thrift helped her save a considerable amount of money. | Her careless spending patterns disregarded the essence of thrift and smart saving habits. |
| The lack of money led to a financial famine in the region. | Despite having enough famine, the community managed to thrive and stay prosperous. |
| Their love for money led to an excess of material possessions. | Their minimalist lifestyle rejected the notion of excess and focused on essentials. |
| Long before money existed, people engaged in barter trading. | In a world without barter, the concept of monetary exchange dominated all transactions. |
| She lived in a posh neighborhood, flaunting her wealth and money. | In stark contrast, she chose a more humble residence, devoid of any signs of posh extravagance. |
| The charity aimed to help the indigent individuals without any money. | Despite their wealth, they reached out to the indigent population and provided assistance. |
| They had plenty of money to spare for the luxurious vacation. | With very little plenty, they made the most out of their simple and content lives. |
| Their money-saving strategies were practical and truly economical. | Instead of being economical, they wasted their wealth on unnecessary expenditures. |
Final Thoughts about Antonyms of MONEY
In conclusion, there are various antonyms for money, such as poverty, debt, and need. While some people may enjoy prosperity, wealth, and abundance, others struggle with financial difficulties. This contrast highlights the diverse financial situations individuals may face, shaping their lifestyles and opportunities.
Understanding these antonyms for money can provide a deeper insight into the societal disparities and financial challenges that exist worldwide. By acknowledging the various aspects of wealth and poverty, we can work towards creating a more equitable and inclusive society where everyone has the opportunity to thrive regardless of their financial status.
Open access peer-reviewed chapter
Arthroplasty as a Choice of Treatment in Hip Surgery
Written By
Mehmet Umit Cetin, Yaşar Mahsut Dincel and Yavuz Selim Kabukcuoglu
Submitted: August 31st, 2018 Reviewed: October 15th, 2018 Published: November 7th, 2018
DOI: 10.5772/intechopen.82031
Abstract
The hip joint bears the most load in the human body. For this reason, it carries a potential risk of degenerative arthritis in individuals with a functionally active lifestyle. The main goal in the treatment of degenerative arthritis is to achieve pain relief and create a hip joint range of motion close to normal. Even today, it is not possible to transform the hip joint, which has been degenerated due to several reasons and worn out due to the physiological properties of the cartilage structure, back to its natural state. Osteotomies, resection arthroplasties and hip arthrodeses, which are designed to compensate the load distribution affecting the hip and relieve the pain, are still employed methods. Total hip arthroplasty, on the other hand, is an alternative solution for the problem. Cemented, cementless and hybrid methods are widely used for this purpose in total hip arthroplasties. The purpose of hip prosthesis surgery is to shape the bone ends, replace the damaged surfaces with various materials, and keep these two structures as separate articulating surfaces. Total hip arthroplasty consists of a femoral component placed in the medullary canal of the femur and an acetabular component placed in the acetabulum. In this article we review the aims, causes, types and techniques of total hip arthroplasty.
Keywords
- acetabulum
- arthritis
- femur
- rehabilitation
- total hip arthroplasty
1. Introduction
The hip joint bears the most load in the human body. Therefore, a functional lifestyle naturally carries a potential risk of degenerative arthritis. In a hip with degenerative arthritis, the main purpose of the treatment is to relieve the pain and create a hip joint range of motion close to normal. Even today, it is not possible to transform the hip joint, which has been degenerated due to several reasons and worn out due to the physiological properties of the cartilage structure, back to its natural state.
Osteotomies, resection arthroplasties and hip arthrodeses, which are designed to compensate the load distribution affecting the hip and relieve the pain, are still employed methods. Total hip arthroplasty (THA), on the other hand, is an alternative solution for the problem. Cemented, cementless and hybrid methods are widely used for this purpose in THAs.
Three different methods, including unipolar hemiarthroplasty, bipolar hemiarthroplasty and THA can be applied in femoral neck fractures, taking the patient's age, functional status before fracture and other accompanying diseases into consideration.
Total hip arthroplasty is considered as one of the most successful orthopedic surgery methods today [1]. Ninety percent of more than 1 million THAs per year worldwide are performed for treatment of osteoarthritis. The aging world population and increasing obesity indicate that the need for THA will increase [1].
A successful joint prosthetic surgery can be achieved with clinical, functional and radiological evaluations. However, it should be noted that many other factors, such as the material used, patient's age, surgical technique and fixation method affect the results. Although hip arthroplasty can be performed successfully in many patient groups including the young ones, it should be kept in mind that young patients in particular should avoid heavy physical activities in order to prevent early failure of the prosthesis [2]. The average prosthetic survival in hip arthroplasty is 10 to 15 years. Nevertheless, there are patients who do not have complaints even after 25 years [3].
Advertisement
2. Total hip arthroplasty
Total hip arthroplasty consists of a femoral component placed in the medullary canal of the femur and an acetabular component placed in the acetabulum. The cementless type of the acetabular component consists of an outer cup attached to the acetabulum and a second cup which articulates with the femoral component.
The function of the femoral component is to replace the resected femoral head and neck. As the length of the femoral neck increases, the vertical height and the medial stem-head distance also increase. In routine practice, the neck used is 8–12 mm long. The relationship between the femoral neck and the implant is established based on anteversion or retroversion in the coronal plane. The vertical height of the femoral neck is measured starting from the lesser trochanter. Since the depth at which the prosthesis is placed in the femoral metaphysis to adjust the vertical height is fixed, the level of osteotomy is not altered; instead, the neck length should be adjusted.
The distance between the center of the femoral head and the stem is the medial offset distance. A wider collodiaphyseal angle shortens the moment arm of the abductors and increases limping. If this angle is narrow, the load on the stem increases and causes loosening or breakage. The vertical height of the rotation center decreases in varus hips. Accordingly, the medial offset is relatively high. The height of the greater trochanter is not an accurate indicator for the center of the head. The vertical height and medial offset in excessively varus-valgus hips are difficult to restore. Therefore, the leg length and vertical height are corrected to avoid the possibility of facing a lower extremity length discrepancy and have a biomechanically stable hip in the postoperative period [4].
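The trade-off described above between the collodiaphyseal (neck-shaft) angle, medial offset, and vertical height can be illustrated with elementary trigonometry. This is a simplified, hypothetical geometry sketch (vertical shaft axis, assumed 50 mm neck length), not a surgical planning formula.

```python
import math

def neck_geometry(neck_length_mm, ccd_deg):
    """Medial offset and vertical height contributed by the femoral neck.

    ccd_deg is the collodiaphyseal (neck-shaft) angle.  Taking the shaft as
    vertical, the horizontal component of the neck is the medial offset and
    the vertical component is the head height.  Purely illustrative geometry.
    """
    rad = math.radians(ccd_deg)
    offset = neck_length_mm * math.sin(rad)    # horizontal abductor moment arm
    height = -neck_length_mm * math.cos(rad)   # vertical head height (ccd > 90)
    return offset, height

# A more valgus neck (larger CCD angle) trades offset for height, which is
# why a wider angle shortens the abductor moment arm:
for ccd in (125, 135, 145):
    off, h = neck_geometry(50, ccd)
    print(f"CCD {ccd} deg: offset {off:.1f} mm, height {h:.1f} mm")
```

The printout shows offset falling and height rising as the angle widens, matching the text's observation that a wider collodiaphyseal angle shortens the abductor moment arm while a varus hip lowers the rotation center and increases the medial offset.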
Anteversion of the femoral neck is important in ensuring stability. A retroverted neck causes posterior dislocations whereas anteverted neck causes anterior dislocations. For rotational stability, the proximal part of the femoral component should fill the metaphyseal cavity completely.
The components are designed either for cemented or cementless implantations.
2.1 Cemented acetabular components
The acetabular component is coated with a thick layer of high-density polyethylene, preferably 6–8 mm [5]. Stability is increased by filling cement into the vertical and horizontal grooves. Protrusions 3 mm high are used to increase the stability between the prosthesis and the cement [5].
There are a number of factors to consider when placing the cemented acetabular components. When the acetabular component is inserted, it should maintain the normal anatomical position at 45° of inclination and 15° of anteversion. The outer surface of the acetabular component should be wrapped with at least a cement layer of 2–5 mm [6]. The boundaries of the acetabular component should be within the boundaries of the bone acetabulum.
2.2 Cemented femoral components
The most commonly used alloy is the chromium-cobalt alloy because of its high elastic modulus, which is a feature that reduces the stresses in the proximal cement layer. The medial section of the stem should be wide in the transverse section. Preferably, the lateral edge should be even wider. Thus, during compression, the load is balanced over the proximal cement mass. The onset of failure in cemented components is seen in the vicinity of the prosthetic-cement complex.
The stem should be planned to fill 80% of the transverse section of the medullary canal and the femoral component should ideally be inserted in the neutral position, in valgus position, or in varus position below 5° [7]. The risk of progressive loosening, cement fracture, proximal bone resorption is higher in patients in whom the prosthesis is inserted in varus positions above 5°. A cement layer of 2 mm thickness should be positioned 4 mm distal of the metaphyseal region of the proximal femur, and second-generation or third-generation cementing technique should be used in order to achieve the stability of the femoral component, lengthen the survival period of the implant and prevent loosening [8].
2.3 Cementing techniques
Along with the advances in surgical techniques, cementing techniques have also improved [9].
2.3.1 First-generation cementing technique
In this technique, cement is mixed manually. It is the technique that requires the least preparation of the medullary canal for prosthesis fixation. The femoral canal is opened, washed and aspirated. Cement in the dough form is applied with the fingers. The prosthesis is placed manually in the neutral position (without varus or valgus). The shape of the femoral stem is sharp-edged to ensure high force transmission.
2.3.2 Second-generation cementing technique
The cement is mixed manually and applied using a "cement gun." The spongious bone in the medullary canal is removed down to the endosteal surface, which is dried after brushing and pulsatile irrigation. A plug is inserted into the medullary canal to prevent distal cement extravasation. Following the retrograde application of the cement, the prosthesis is placed in the neutral position manually or using distal centering methods. The sharp corners of the pro
Seminar 10/19/2011 - Mikhail Belkin, University of Texas at Austin: Nanoscale Imaging and Plasmonic Devices in Infrared
Abstract: In this talk, I will present two of our research projects. I will start with a simple technique for nanoscale mid-IR microscopy that we have developed recently. Subwavelength resolution is achieved by detecting optical absorption through measuring local photothermal expansion with an atomic force microscope (AFM). Spatial resolution is determined by the thermal diffusion length, which is smaller than 50 nm in typical chem/bio samples excited with nanosecond laser pulses. Tunable quantum cascade lasers are used as light sources. To detect minute sample expansions, we tuned the repetition rate of the laser pulses into resonance with the AFM cantilever bending frequency and benefited from the resultant resonant enhancement. We were able to take mid-IR images and vibrational spectra of polymer films as thin as 50 nm with λ/170 spatial resolution. The extension of this method to the THz spectral range and possible improvements to achieve monolayer sensitivity will also be discussed.
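The sub-50 nm resolution figure in the abstract is consistent with the standard order-of-magnitude estimate of thermal diffusion length, ℓ = √(D·t). In the sketch below, the polymer thermal diffusivity (~1e-7 m²/s) and the 10 ns pulse duration are assumed, illustrative values; the exact prefactor varies by convention.

```python
import math

def thermal_diffusion_length_nm(diffusivity_m2_s, pulse_s):
    """Order-of-magnitude thermal diffusion length l = sqrt(D * t), in nm."""
    return math.sqrt(diffusivity_m2_s * pulse_s) * 1e9

# An assumed, typical polymer diffusivity of ~1e-7 m^2/s and a 10 ns pulse
# give a diffusion length of ~30 nm, consistent with the sub-50 nm
# resolution quoted in the abstract.
print(round(thermal_diffusion_length_nm(1e-7, 10e-9)))  # -> 32
```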
In the second part of the talk, I will present the results of our project aimed at developing broadly-tunable monolithic bandpass filters based on unique properties of long-range surface plasmon-polaritons. A small change of the refractive index of the cladding material in these filters may be translated into a large bandpass wavelength shift. We present experimental results with proof-of-principle devices operating at telecom wavelengths, in which a 0.004 change of the refractive index of the cladding material is translated into 210 nm of bandpass tuning.
All seminars are held on Wednesdays from 12:00 noon-1:00 p.m. in the Bowen Hall Auditorium Room 222. A light lunch is provided at 11:30 a.m. in the Bowen Hall Atrium immediately prior to the seminar.
Location: Bowen Hall Atrium
Date/Time: 10/19/11 at 12:00 pm - 10/19/11 at 1:00 pm
Category: PRISM/PCCM Seminar Series
Supporting Unskilled People in Manual Tasks through Haptic-Based Guidance
Mario Covarrubias (Politecnico di Milano, Italy), Monica Bordegoni (Politecnico di Milano, Italy), Umberto Cugini (Politecnico di Milano, Italy) and Elia Gatti (Politecnico di Milano, Italy)
DOI: 10.4018/978-1-4666-4422-9.ch048
This chapter presents a methodology that the authors developed for the evaluation of a novel device based on haptic guidance to support people with disabilities in sketching, hatching, and cutting shapes. The user's hand movement is assisted by a sort of magnet or spring effect attracting the hand towards an ideal shape. The haptic guidance device has been used as an input system for tracking the sketching movements made by the user according to the visual feedback received from a physical template without haptic assistance. Then the device has been used as an output system that provides force feedback capabilities. The drawn shape can also be physically produced as a piece of polystyrene foam. The evaluation methodology is based on a sequence of tests, aimed at assessing the usability of the device and at meeting the real needs of the unskilled people. In fact, the system has been evaluated by a group of healthy and unskilled people, by comparing the analysis of the tracking results. The authors have used the results of the tests to define guidelines about the device and its applications, switching from the concept of "test the device on unskilled people" to the concept of "testing the device with unskilled people."
Chapter Preview
The aim of the present chapter is to describe the test methodology in order to control and measure the efficiency of the haptic guidance device.
As the design of the haptic guidance device is a widely inter-disciplinary project involving experts from various disciplines, including such diverse areas as pedagogy, psychology, computer science, mechanical, mechatronic, and product design, this chapter is written by the intent to make not only the specific methods and procedures of accuracy testing intelligible for readers from various disciplines, but also its motivations. Therefore, not only the methodology itself and the tools that form the basis of this methodology will be described in a more detailed way.
This chapter should also serve as a set of practical guidelines for the testing procedures (but not yet as a detailed manual). It is written with the intent that experts involved in the accuracy testing procedures in the design of the haptic guidance system be able to use it as a detail reference work for actual practical activities within efficiency and accurate testing: explaining the tools to be used, the measurements to be taken, the way of data collecting.
This chapter is also meant to be intelligible, explanatory and enough detailed for those who approach the haptic guidance device and its testing activities professionally somewhat from the outside, but are nevertheless deeply involved via other ways.
In implementing the haptic guidance device, we need to prove the following: 1) the haptic support works; 2) it suits the scientific facts and the experience about the nature of the given problem; and 3) it is safe. In other words, we have to offer evidence-based support.
A point that is crucial in the logic of evaluating the accuracy of the haptic guidance device is multiple comparisons. We assess the unskilled users' initial state and then its changes due to the use of the haptic guidance device, and we compare before- and after-measurement results. Moreover, we create an appropriate control group.
At first sight, it seems a relatively simple procedure, but there are several further considerations in order to reach high methodological quality and to obtain strong evidence. The following brief review is based on the research of Geyman and colleagues (Geyman, Deyo, & Ramsey, 2000).
- 1. A prospective "before-after" evaluation is needed. Retrospective efficiency testing is not as accurate and can be influenced by interpretations and later experiences.
- 2. We have to select the subjects carefully and advisedly. Obviously, the main principles of the selection depend on the goal of the haptic guidance system, but there is no doubt that solid baseline assessments (e.g., formalized, standard evaluations) are needed before we start the accuracy testing of the device.
- 3. After the baseline assessments, we assign the subjects randomly to the experimental group.
- 4. The sample (the number of experimental subjects) has to be large enough for an appropriate statistical analysis. In this way, by applying an adequate level of statistical significance as a criterion, we will assure the accuracy of the haptic guidance device, providing confirmatory results.
- 5. Testing and evaluation of the results may easily be affected by possible errors.
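For a before-after design, the "appropriate statistical analysis" called for in point 4 is typically a paired comparison. A minimal sketch using only the standard library follows; the subject scores are hypothetical, and a real study would also report degrees of freedom and a p-value.

```python
import math

def paired_t_statistic(before, after):
    """t statistic for a paired before-after comparison.

    t = mean(d) / (sd(d) / sqrt(n)), with d = after - before and the sample
    standard deviation computed with n - 1 degrees of freedom.
    """
    d = [a - b for b, a in zip(before, after)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical tracking-error scores (lower is better) for six subjects
# before and after haptic guidance; a strongly negative t suggests the
# guidance reduced the error.
before = [8.1, 7.4, 9.0, 6.8, 7.9, 8.5]
after = [6.2, 6.9, 7.1, 6.0, 6.5, 7.0]
print(round(paired_t_statistic(before, after), 2))
```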
To accomplish our aim it is necessary to assess how sketching control movements under haptic feedback are affected in people with motor and visuo-spatial disorders, principally Down syndrome. Sketching is one of the most complex human activities, in which the hand movements are controlled by the central nervous system, which regulates the activity of the hand and arm muscles to act in synergy. The central nervous system receives dynamic feedback information from visual sensors and from other body sensors located in the skin, muscles, and joints, while regulating the motor output.
The Coronavirus pandemic (COVID-19) has become one of the major disruptions to the global economy, financial markets, and supply chains.
The costs of these unprecedented developments for countries are huge, including the direct disruption to global supply chains, weaker demand for imported goods and services, and the wider regional deterioration in international businesses.
Risk aversion has risen in financial markets, with the US 10-year interest rate falling to a record low and equity prices declining sharply; commodity prices have dropped, and business and consumer confidence have deteriorated immensely.
Major institutions and banks have cut their forecasts for the world economy and uncertainties of the virus impact on the global economy have stunned industries worldwide.
The source of the Coronavirus, Hubei province had a GDP of more than $500 billion. Bloomberg's recent report showed that Hubei contributed 39.4% of the total phosphorus mining in China, 11.9% of apparel produced in China, 11.6% of fertilizer, 8.9% of automobiles and 4.9% of cement between January and October 2019.
While the World Economic Forum's prediction, China's growth in the 1st quarter of 2020 is expected to slow down to 4.5%.
"From an economic perspective, the key issue is not just the number of cases of COVID-19, but the level of disruption to economies from containment measures," Ben May, Head of Global Macro Research at Oxford Economics, said in a recent report.
The government of China locked down cities, controlled movements of millions and suspended business/manufacturing operations — moves that will slow down the world's second-largest economy and drag down the global economy along the way.
The global textile and apparel industry has witnessed its standstill from manufacturing to consumers. Major retailers and brands are stopping current and future orders.
For instance, China exported more than $35 billion worth of clothing apparel and made-up textile products to the USA alone. By all accounts, this figure is likely to fall drastically, not only in the US but also in the hard-hit EU and around the globe.
Trading partners likely to report the largest impact from a 2% reduction in Chinese exports of intermediate inputs are the EU (estimated at $16 billion), US ($5.8 billion), Japan ($5.2 billion) and South Korea ($3.8 billion).
Opportunities for others
This global economic pause due to Coronavirus has given other smaller economies a chance to step up. For example, Pakistan with a complete textile and apparel supply chain existing in the country can profit from the slowdown in trade flow between China and the USA and the EU in textile products as it can capture some of the fashion market shares.
From 2013 and 2018, around US$1.1 billion was added to the exports of articles of apparel and clothing to the EU. Furthermore, exports of made-up textile articles improved to $650 million between 2013 and 2018 due to the GSP Plus preferences.
Pakistan Bureau of Statistics data shows, exports boomed 13.82% year-on-year in February 2020. Amidst a global slowdown in trade, exports from Pakistan have gained by 3.65% in the current FY. And the country's exports to the EU increased from US$6.3 billion in 2013 to US$8 billion in 2018.
This shows that the unilateral trade incentives provided by the European Union to Pakistan in the form of GSP Plus status did help boost export sales to the region and limit what otherwise could have been a complete decay of the export sector between 2013 and 2018.
The trade linkages established between Pakistani exporters and their clients can help increase exports and tap newer markets as supply chains are threatened due to the spread of the coronavirus.
Pakistan should continue with its policies to boost total exports. Although growth in global trade is likely to slow this year, Pakistan must focus on developing its export sector to take advantage of the opportunities created by the challenges facing the large manufacturing powerhouses.
- Foundry is offering a period of free access to their Nuke, Modo, Mari, and Katana tools.
- Trend Micro is providing 6 months of free access to its consumer internet security product, Trend Micro Maximum Security, for people working from home.
- Trend Micro is also researching how COVID-19 is being used in a variety of malicious campaigns including email spam, BEC, malware, ransomware, and malicious domains.
- To support remote collaboration, Trimble has expanded the capabilities of its Trimble Connect Personal license at no cost to users. Users will be able to create more projects, invite more project members, and access more storage from March 19 to June 19.
- Twilio is a cloud communications platform that helps governments and businesses engage constituents and customers across channels – text, voice, video, and more. For government agencies, Twilio is offering discounts on services through its partners' GSA Schedule 70 and other contracting vehicles.
- Public services, health organizations, and NGOs are using Twilio to improve their COVID-19 response initiatives:
- Reduce public health hotline backlogs and wait times: Deploying automated chatbots and other self-service capabilities to share public service information and answer frequently asked questions will put the right information in the hands of citizens faster and more efficiently while freeing specialists to tackle the complex requests only they can handle.
- Combat misinformation with proactive alerts: Proactive SMS, voice, and email notifications enable organizations to keep their constituents up to date on the latest information based on trusted sources of public health.
- Reach more people by using preferred communication channels: Effective containment of the virus is largely dependent on providing everyone in Coronavirus-affected areas with the right trusted information to protect themselves and their families. This requires delivering your message over channels where end-users in need of assistance prefer to communicate.
- Automate appointment reminders and scheduling: You can set up reminders via email, text, and/or voice to reduce no-shows and ensure that appointment slots that are canceled can be offered to others. Many organizations also offer a text-in option to schedule appointments in the first place.
- For non-profits and social impact organizations, Twilio.org offers benefits through its Impact Access Program. More information is available here.
- Twilio is donating $1.5 million to organizations that are leading the medical responses to COVID-19 and low-income areas that are being affected by the virus. They are also matching employee donations 2:1 for charitable organizations focused on COVID-19 response.
- Twilio has created a chatbot for frequently asked questions about COVID-19. The template for that chatbot is available to public health agencies, hospitals, and health-focused nonprofits.
- Twilio Video is now free for three months for healthcare, education, and nonprofit institutions.
- Twilio.org announced a $2 million grant round for crisis lines supporting people impacted by COVID-19. The program aims to help more people in crisis by scaling program services using cloud technology.
- Twilio is working with health departments around the world to develop contact centers for reaching COVID-19 patients and their contacts.
- Workday joined a collaborative of 25 companies to raise $22 million to address the COVID-19 pandemic. Workday donated a total of $1.5 million to the Silicon Valley Community Foundation, the Centers for Disease Control and Prevention, and the United Nations Foundation. They will also match employee donations to the CDC Foundation, COVID-19 Solidarity Response Fund for WHO, Direct Relief, and Doctors without Borders.
- Workday announced a new partnership with Salesforce to help organizations safely return to work.
BSA | The Software Alliance (_URL_) is the leading advocate for the global software industry before governments and in the international marketplace. Its members are among the world's most innovative companies, creating software solutions that spark the economy and improve modern life.
With headquarters in Washington, DC, and operations in more than 30 countries, BSA pioneers compliance programs that promote legal software use and advocates for public policies that foster technology innovation and drive growth in the digital economy.
The Promise of Artistic Process
Here's how Social Emotional Learning aligns our curricula with the standards, particularly in a virtual teaching world.
By NAfME Members Lori Schwartz Reichl, Fran Kick, and Scott Edgar
"An ounce of performance is worth pounds of promises."
— Mae West
The pandemic has limited the ability of performing arts educators and students to perform to the degree we once had. In many schools, reductions have been placed on the quantity of students permitted in a given learning area, restrictions have been placed on how close students can be spaced from one another, rehearsal and performance venues have remained off-limits, and opportunities for performance have been cancelled or postponed. Safety, which needs to be educators' prime focus as we continue to move forward this academic year, is dictating how frequently performances occur and how differently they look, sound, and inspire.
Many ensemble directors will confirm that performing is the Core Music Standard for which we spend the most time preparing. It's the ultimate result of an ensemble's preparation and often its motivation. However, the National Core Music Standards in the United States also include Creating, Responding, and Connecting. What happens when the standard of performing is restricted? Will music educators consider other performance avenues, both live and virtual? Or, will we perhaps focus on the three remaining standards with greater emphasis? The promise of artistic process still exists. Are we, as music educators, acknowledging and accepting this?
Prior to the pandemic, a number of music educators might have painted themselves into a corner by defining music education primarily based on performance success. However, at the onset of the pandemic, if perusing social media, one may have seen music teachers posting in a frenzy about not being able to "do" what they and their students once did—perform. Quotations such as: "I have to lower my bar of expectations." "What will we do now?" and "I'm a band director!" were plastered everywhere. The idea of the concert band, or the concert choir, or the full-orchestra, or the dance company, or the full ensemble versus the individual being the focus of the arts curriculum has permeated our thinking for years. Yes, the full ensemble is important. Yet, it is the individual musician/performer who is ultimately the primary core contributor in any ensemble.
When the individual student performers improve, the entire ensemble will improve. Many performing-ensemble directors forget that our licensure is in the field of music education, which involves teaching students how to learn music—not just perform it. It is the individual student and their development as a person, learner, and musician that makes any program successful.
If we define our role of band director, choral conductor, or orchestra conductor as fixers of notes and rhythms in search of a clean performance, we're going to find ourselves not meeting our students' needs this school year. The trauma our students experienced during months of not engaging in a formal educational setting—and the numerous disruptions all of us have had to navigate—demand a different approach to make music education relevant to our students. We've always taught students to be resilient in our performing ensembles. Now, we need to help them be resilient at the individual level for the greater good of music education.
When performance is restricted, many music educators become concerned about what they will accomplish in their learning spaces. However, the pandemic has broadened our point of view beyond "just performing" and has provided the possibility of more equally aligning the remaining standards. This realignment could potentially allow students to individually fall in love with music in a new way. The challenge is how to build relationships with students when we're not able to meet in person.
Creating a Path to Performance
Social Emotional Learning (SEL) is a skill-based approach that can help navigate this transition by building students' self-awareness, self-management, social-awareness, relationship-management, and responsible decision-making skills (simplified to the three goals of: self, others, and decisions). SEL enables us to respond to challenges instead of just reacting to them. Music is inherently emotional. It makes us feel. Music is social. It has always been a rallying call for human beings. It is essential that music teachers capitalize on the connections between SEL and music. They are our secret weapons with the superpowers our students need—SEL and music education—now more than ever!
For SEL to be effective in teaching students the life skills needed to navigate their world after they leave our music classrooms, it must be embedded into curricular content. For us, as music educators, it must be musical. We must make SEL intentional and meaningful; it does not "just happen," and we cannot rely on the inherent fertile ground and potential that music education provides to teach our students these skills. If we are relying on music to latently teach these skills instead of music educators intentionally embedding SEL into their work, we will miss a great opportunity. SEL is not another box we need to check or another item we need to squeeze into our time with students. When done well, Musical SEL (MSEL) should feel like great music teaching. If it feels like SEL is distracting us from teaching music, then we are not doing it optimally nor maximizing the true power of music.
Unpacking Voice and Choice
Two central thoughts are at the heart of SEL when troubleshooting this concern. We need to honor our students' voices.
Maybe we haven't provided a space to amplify their voice and vision as much as we should have in the past. However, now we can ask students what they want out of their musical education. What can they bring to the table? What can they do to help our programs thrive? Then, we need to give them choices. Our students, while highly effective at working well within our programs, generally do not have a lot of choices. If we can start to unpack voice and choice, then our students will still have the ownership and pride we have come to develop as an entire ensemble. Students will be energized and inspired to contribute productively to our programs by building relationships in more individualized ways.
Aligning with the Standards
How can we use SEL to align our curricula with music standards (when performing is restricted) and allow students to have a voice and choice in our program? Consider these strategies:
Use the essential questions from the national music standards to form lessons, produce writing prompts for reflection, or facilitate significant discussions with students:
- How do musicians generate creative ideas?
- How do musicians make creative decisions?
- How do musicians improve the quality of their creative work?
- When is creative work ready to share?
Understand how students are responding to music during a global pandemic, a social justice movement, and an evolution in education. Their responses to music are often informed by analyzing the social, cultural, and historical context. Ask students to elaborate on the following prompts:
- Explain what music you have chosen to listen to or perform recently.
- Identify reasons for selecting this music.
- Interpret the expressive nature of this music.
- Identify the effect the music is having on you.
- Identify music that is the theme song for a social justice issue you are passionate about.
Make meaningful connections to real-life scenarios. Many authors, composers, and performers have admitted to experiencing writing/performance block during the pandemic. They have shared the reasons for this; how they have overcome it; and the motivation that has inspired them to write, compose, or perform again. Invite these artists into your learning space (face-to-face, virtually, or through a recording) to share their personal interests, experiences, ideas, and knowledge of creating, performing, and responding. Allow your students to ask questions to these artists. Then, ask your students how they can relate through their discipline, experiences, and daily life. (See _URL_ for more ideas to intermix SEL and the National Core Arts Standards.)
Embedding SEL into the Musical Curriculum
Darlene Machacon, an elementary music teacher and chorus director at Garden Grove Unified School District in Garden Grove, California, whose school began the academic year with 100 percent virtual instruction, says, "It is essential that SEL is not practiced just because it is the trend in education. We must be actively critical about how it is implemented with our specific students' cultures."
Machacon says teachers must aim to be culturally responsive and anti-racist in their approaches of SEL and that strategies should be implemented with what is best for students and families. "The challenges can be one's initial mindset," she says. "We prioritize what we want to make time for. The challenge is eliminating the mindset that our classrooms are only about music and nothing else."
Jonathan Grantham, director of bands at Amador Valley High School in Pleasanton, California, states, "It is critical that music educators understand that they are already building SEL bridges regularly in their classrooms. Allowing teachers to align what they are already doing through an intentional SEL lens will help to allay stresses that this is just another fad to follow or program to adopt."
Grantham, whose school also began the academic year virtually, reminds us that teachers are busy, and so a strong entry point for engagement is to start with what is already being done. "Remember to breathe," he says. Grantham points out that we don't have to meet all the usual expectations and targets in our classrooms to make the time we have valuable. "Relationships will take longer to build virtually. Be patient with how much longer things take. It's really helpful to remember that we have more control than we think."
"Relationships will take longer to build virtually. Be patient with how much longer things take. It's really helpful to remember that we have more control than we think."—Jonathan Grantham
Bobby Olson, choir director at Round Lake High School in Round Lake, Illinois, says, "I've found
Thanks to a Canadian-built weather instrument, scientists have discovered snow falls from Martian clouds.
"Nothing like this view has ever been seen on Mars," said Jim Whiteway, an associate professor at York University in Toronto and lead scientist for Canada's contribution to the Phoenix mission. "We'll be looking for signs that the snow may even reach the ground."
Using a laser instrument called the lidar, scientists found clouds composed of ice crystals and some of those ice crystals became large enough to fall through the Martian atmosphere. The lidar emits pulses of laser light into the sky and then uses the data to examine dust and cloud particles in the atmosphere.
Prof. Whiteway referred to the snow as "diamond dust," which is also found in the Canadian Arctic, where extremely cold, dry weather patterns mimic conditions on Mars.
DB Table Selector
Source
This node takes a database connection as input and allows selecting a table or view available through that connection. The node outputs the incoming connection together with a query referencing the selected table or view, which can be used in subsequent database nodes. In addition, this node allows a customized SELECT statement to narrow the results window for the next node(s).
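Outside of KNIME, the same pattern can be sketched in plain Python with sqlite3: the selected table becomes a query that merely references it, and a downstream step narrows the results with a customized SELECT. The table name, columns, and data below are hypothetical, purely for illustration.

```python
import sqlite3

# Stand-in for the incoming DB Session: a live database connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "Acme", "EU"), (2, "Globex", "US"), (3, "Initech", "EU")],
)

# Selecting a table yields a query that just references it ...
selected_table_query = 'SELECT * FROM "customers"'

# ... which a downstream step can wrap in a customized SELECT
# to narrow the results window before fetching.
narrowed = f"SELECT id, name FROM ({selected_table_query}) WHERE region = 'EU'"
rows = conn.execute(narrowed).fetchall()
print(rows)  # [(1, 'Acme'), (3, 'Initech')]
```

Because the query is passed along rather than executed eagerly, the database can combine the table reference and the narrowing SELECT into a single statement, which is the same benefit the node provides to subsequent database nodes.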
Input Ports
1. Type: DB Session
DB Connection
Output Ports
1. Type: DB Data
DB Data referencing the selected table
Extension
This node is part of the extension
KNIME Database
v4.0.0
Short Link
Drag node into KNIME Analytics Platform