Dora Bas Rivka Silver O'H
The Sideways Ark
G-d told Moshe to go among the Jews and accept donations for building the Mishkan (Tabernacle). These donations could consist of gold, silver, copper, colored wool and other fabrics, skins, wood, gems, oil and spices. G-d would instruct Moshe in building the Mishkan, where His Presence would reside.
The first vessel G-d described was the Aron (Ark), a wooden box overlaid with gold. There were rings on the corners, through which poles were placed to carry the Aron; these poles were never to be removed. (The Ark is popularly depicted with the poles running along the longer sides but, really, the poles ran along the shorter ends. See Talmud Menachos 98a-b).
| 70
|
Acidosis is a condition in which there is excessive acid in the body fluids. It is the opposite of alkalosis (a condition in which there is excessive base in the body fluids).
Causes, incidence, and risk factors:
The kidneys and lungs maintain the balance (proper pH level) of chemicals called acids and bases in the body. Acidosis occurs when acid builds up or when bicarbonate (a base) is lost. Acidosis is classified as either respiratory acidosis or metabolic acidosis.
Respiratory acidosis develops when there is too much carbon dioxide (an acid) in the body. This type of acidosis is usually caused by a decreased ability to remove carbon dioxide from the body through effective breathing. Other names for respiratory acidosis are hypercapnic acidosis and carbon dioxide acidosis. Causes of respiratory acidosis include:
- Chest deformities, such as kyphosis
- Chest injuries
- Chest muscle weakness
- Chronic lung disease
- Overuse of sedative drugs
Metabolic acidosis develops when too much acid is produced or when the kidneys cannot remove enough acid from the body. There are several types of metabolic acidosis:
Diabetic acidosis (also called diabetic ketoacidosis and DKA) develops when substances called ketone bodies (which are acidic) build up during uncontrolled diabetes.
- Hyperchloremic acidosis results from excessive loss of sodium bicarbonate from the body, as can happen with severe diarrhea.
Lactic acidosis is a buildup of lactic acid. This can be caused by:
- Exercising vigorously for a very long time
- Liver failure
- Low blood sugar (hypoglycemia)
- Medications such as salicylates
- Prolonged lack of oxygen from shock, heart failure, or severe anemia
Other causes of metabolic acidosis include:
Signs and tests:
- Arterial or venous blood gas analysis
- Serum electrolytes
- Urine pH
An arterial blood gas analysis or serum electrolytes test, such as a basic metabolic panel, will confirm that acidosis is present and indicate whether it is metabolic acidosis or respiratory acidosis. Other tests may be needed to determine the cause of the acidosis.
Treatment depends on the cause. See the specific types of acidosis.
Acidosis can be dangerous if untreated. Many cases respond well to treatment.
See the specific types of acidosis.
Calling your health care provider:
Although there are several types of acidosis, all will cause symptoms that require treatment by your health care provider.
Prevention depends on the cause of the acidosis. Normally, people with healthy kidneys and lungs do not experience significant acidosis.
Seifter JL. Acid-base disorders. In: Goldman L, Ausiello D, eds. Cecil Medicine. 23rd ed. Philadelphia, Pa: Saunders Elsevier; 2007:chap 119.
Review Date: 11/15/2009
Reviewed By: David C. Dugdale, III, MD, Professor of Medicine, Division of General Medicine, Department of Medicine, University of Washington School of Medicine. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc.
| 457
|
Vanity Top Finishes
Vanity Top Composition
Geologically speaking, rocks are classified into 3 main categories: igneous, sedimentary, and metamorphic. Our sinks are made of all natural stones including granite, travertine, and cream marfil. Granite is a type of igneous rock, travertine is sedimentary while cream marfil is metamorphic.
Cream Marfil is a type of marble, which is a metamorphic rock resulting from the metamorphism of limestone. Limestone is a sedimentary rock, formed mainly from the accumulation of organic remains (bones and shells of marine microorganisms and coral) from millions of years ago. The calcium in the marine remains combined with carbon dioxide in the water in turn forms calcium carbonate, which is the basic mineral structure of all limestone. When subjected to heat and pressure, the original limestone experiences a process of complete recrystallization (metamorphism), forming what we know as marble. The characteristic swirls and veins of many colored marble varieties, for example, cream marfil, are usually due to various mineral impurities. Cream marfil is formed with a medium density with pores.
Travertine is a variety of limestone, a kind of sedimentary rock, formed of massive calcium carbonate from deposition by rivers and springs, especially hot bubbly mineral rich springs. When hot water passes through limestone beds in springs or rivers, it dissolves the limestone, taking the calcium carbonate from the limestone to suspension as well as taking that solution to the surface. If enough time comes about, water evaporates and calcium carbonate is crystallized, forming what we know as travertine stone. Travertine is characterized by pores and pitted holes in the surface and takes a good polish. It is usually hard and semicrystalline. It is often beautifully colored (from ivory to golden brown) and banded as a result of the presence of iron compounds or other (e.g., organic) impurities.
Travertine is mined extensively in Italy; in the U.S., Yellowstone Mammoth Hot Springs are actively depositing travertine. It also occurs in limestone caves.
Granite is a very common type of intrusive igneous rock, mainly composed of three minerals: feldspar, quartz, and mica, with the first being the major ingredient. Granite is formed when liquid magma (molten rock material) cools beneath the earth's crust. Due to the extreme pressure within the center of the earth and the absence of atmosphere, granite is formed very densely with no pores and has a coarse-grained structure. It is hard, firm and durable.
| 218
|
Tim the Plumber wrote:
The idea that you can predict the climate based on its temperature behaviour between 1970 and 1998 is silly. Just as the absence of warming between 1998 and 2011 cannot utterly disprove AGW, the rise between 1970 and 1998 cannot 100% prove the theory that CO2 is a significant greenhouse gas at the levels we have today.
Nobody is trying to predict temperatures based on historical temperatures over the last 40 or so years. The predictions are based on our understanding of earth's climate over hundreds of millions of years and particularly the last 4 million years of recurring ice ages. The climate, while complicated, has to obey some very simple basic physical rules, namely that the energy coming in has, over time, to equal the energy going out. Change that simple relationship in some way and the temperature will change until such time as the equation is back in balance. It is certain that greenhouse gases reduce the amount of energy that leaves the earth.
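For reference, the "energy in equals energy out" rule being appealed to here is usually written as the zero-dimensional radiative balance (a standard textbook form, not something quoted from the thread itself):

$$\frac{S_0}{4}(1-\alpha) = \varepsilon \sigma T^4$$

where $S_0$ is the solar constant, $\alpha$ the planetary albedo, $\varepsilon$ the effective emissivity, $\sigma$ the Stefan-Boltzmann constant, and $T$ the equilibrium surface temperature. Adding greenhouse gases lowers the effective emissivity, so $T$ has to rise until the outgoing radiation again matches the incoming solar energy.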
Northern Europe is having a wet and cool summer, it's just America which is having a long, hot and dry one.
No my original statement is correct
According to NOAA: http://www.ncdc.noaa.gov/sotc/global/2012/6
The Northern Hemisphere land and ocean average surface temperature for June 2012 was the all-time warmest June on record, at 1.30°C (2.34°F) above average.
The Northern Hemisphere average land temperature, where the majority of Earth's land is located, was record warmest for June. This makes three months in a row — April, May, and June — in which record-high monthly land temperature records were set. Most areas experienced much higher-than-average monthly temperatures, including most of North America and Eurasia, and northern Africa. Only northern and western Europe, and the northwestern United States were notably cooler than average.
Tim the Plumber wrote:
When thinking about such climatic events it is vital to have a sense of proportion and not see a tiny change over 3 decades as a reason to think that there will be a drastic "exponential" continuation of this.
The temperatures changes over the last 3 decades simply confirms our basic understanding of the climate.
It is akin to having a graph of the speed of your car traveling along a highway. When the speed is 55mph your passenger is happy; when the graph plots up to 57 mph the passenger panics because the car is about to accelerate until the machine disintegrates at the sound barrier. When the graph shows a slowing to 53mph the panic is of the sudden stopping of the car and the traffic behind slamming into the back of the car.
No it is more being in a car where the cruise control is stuck and the speed just keeps increasing.
Climate varies quite a lot.
Because we live fairly short lives we do not remember the droughts of the dust bowl. We do not remember the medieval warm period. We do not remember the frost fairs on the frozen Thames.
This is why we maintain weather data, which shows that the current conditions are both worse and different.
We should take these dire warnings with a big pinch of salt.
Dire warnings should be assessed on the merits and action taken if necessary but never ignored
The sea level rose by 18cm last century, how many cities flooded because of this? This century looks like it could be twice as bad, maybe.
So as long as we split the sea level rises into 18 cm chunks it will be no problem ?
I am reminded of camels transporting straw.
| 250
|
Tips for helping children behave
(especially in public)
I thought it was time to write down some of the tips I’ve acquired that concern children and behavior, as I’ve learned what I have about the brain and how it works. What I learned is that while you are thinking about something, you feel as you do about it. If you want to change how you feel, changing your thoughts helps you do that immediately. Just as I say, “Imagine the tip of your left pinky finger… where the nail meets the skin…” Did you think of it? Did you think of your pinky because I mentioned it? You did that because that is how quickly our brain responds.
Tip 1. Keep a picture or small item (from a special time or event) that represents something special to the child with you.
Yes. Helping a child regain composure during a meltdown may be painful and embarrassing, not to mention the discomfort your child perceives at that time. There seems to be great debate over how to handle a situation like a meltdown: let them ride it out, leave immediately, yell, or whatever… if you have tried these techniques, I’d like you to pause and ask yourself how helpful your response was in the moment.
Keeping a picture of a grandparent or a party or a vacation or an item or toy that they really want for their birthday or special holiday… (Big breath…) you get the picture… it helps take your child’s thoughts to a more pleasant place. You can make this even more effective by asking them questions about it, even if you know the answers ; )
Ask questions like, “Hey remember this? Where/who is this again? Do you remember that thing in the picture…? What do you think happened right before this? What is your favorite thing about this? Can you make up a sentence/song/rhyme/or draw a picture about the picture? Can you spell something in the picture?”
These questions will take their focus to something better. Once they’ve calmed down and appear to be past the issue, ask them what happened and explain why that behavior isn’t helpful or necessary.
Tip 2. A simple game of ‘I spy’ with extra OMG
This tip is great if you find yourself out and are caught empty handed without anything handy to keep your kids busy. Look around you and spot something ANYTHING that either you don’t see often or haven’t seen in awhile (not a person) and get excited while smiling and saying, “OMG you are never going to find what I am looking at! It is soooo _____________” This usually leads to a game of I spy which can be changed as your child develops I spy colors… I spy words that start with ___... I spy numbers …
Tip 3. Text Message back and forth
I noticed a long time ago how technology has changed the interpersonal dynamic. Being more of a “find the solution girl”, I established some rules like no cell phone at the table for meals. Please know my daughter was 4 when I did this knowing that when she is grown she will have a phone and I probably will want to use our dining times to connect. I figured our meals are relatively short and whatever needed me could wait the 30 minutes. Also, it was a perfect opportunity to demonstrate the behavior I hope to create.
This is what led me to texting with my child. I remember when she was beginning to read that I looked for every practical experience for her to do so… signs, menus, airport terminals… everything. But sometimes there is nothing to read and nowhere to go as you get stuck waiting in line or for an appointment or for whatever presents a time where you can text and you need to keep kids busy and have fun… text them. Hand them your phone and say here… and let them read your message for them. Then let them respond to you via text. Hand the phone back and forth…after a few texts you may be happy to learn what you do… and you will both have fun while doing it.
Tip 4. Start them on a story.
This is one I use all the time and changes just as much as I use it. I simply start a story using something that I see for example… “Once there was this really cool girl who sat down to write some really helpful stuff for parents…” then they have to look around and continue the story using something they see. Example “But that girl decided to step away from her desk and go play with the fairies living in her backyard…”
Start stories about the cars you see driving or the foods you see in the market or the stars in the sky… about the waves in the ocean and the mermaids that live deep below.
When you run out of time for your part and you’ve got them interested, tell them to draw a picture of it or write a story about it.
How smart and well behaved your children are now. Thanks for reading. You do make a difference in your child’s life. Much Love.
| 39
|
Generalized anxiety disorder (GAD) is an anxiety disorder that is characterized by excessive, uncontrollable and often irrational worry about everyday things that is disproportionate to the actual source of worry. This excessive worry often interferes with daily functioning, as individuals suffering GAD typically anticipate disaster, and are overly concerned about everyday matters such as health issues, money, death, family problems, friend problems, relationship problems or work difficulties. Physical symptoms can include fatigue, fidgeting, headaches, nausea, numbness in hands and feet, muscle tension, muscle aches, difficulty swallowing, bouts of difficulty breathing, difficulty concentrating, trembling, twitching, irritability, agitation, sweating, restlessness, insomnia, hot flashes, rashes, and an inability to fully control the anxiety. These symptoms must be consistent and on-going, persisting at least 6 months, for a formal diagnosis of GAD to be introduced. Generalized anxiety disorder is estimated to occur in 5% of the general population. Women are generally more affected than men .
| 868
|
Since many ancestors of Americans were foreign born, naturalization records
are another a source of genealogical information that you might want to
investigate. Naturalization is the process through which a foreign born
person becomes a citizen of the United States and is eligible to vote.
Not all immigrants became citizens as it is not required. Many obtained
their citizenship because of pride in their new country and a desire to
participate in democratic elections, a privilege perhaps not accorded
to them in their country of birth. Others became citizens for more materialistic
reasons, such as the right to acquire free land through homesteading.
During times of war, there was often hostility towards people from the
enemy country and immigrants may have obtained citizenship to show their
loyalty to the U.S., especially if they had children serving in the U.S. military.
Was Your Ancestor Naturalized?
Before beginning a search for a naturalization record, it may save hours
of futile research if you try to determine if there is evidence that the
individual you are researching did become a citizen. There are several
ways to do this:
- Location of Birth: Was the person foreign born? Usually there's no need to be naturalized if born in the U.S.
- Census: The 1900 and 1910 censuses ask if a person is naturalized, and 1920 further asks the year of naturalization. Indirectly, the 1820 and 1830 censuses provide a clue with the question "number of foreigners in each household not naturalized."
- Homesteading Land: The person had to have initiated the naturalization process to be eligible for free land through homesteading.
- Voter Registration Lists: Is he/she listed as a voter?
- Occupation: Did this person hold a job that required citizenship?
Even with the above information, keep in mind the following caveats:
- Not all foreign born individuals applied for citizenship and a child
born abroad is still a U.S. citizen if his/her parents are. During much
of our history, the wife and children automatically became citizens
when the husband/father took out citizenship papers.
- Naturalization was one of many census questions. The person who provided
the answer may not have known in fact if someone else had been naturalized.
An individual may have said yes because he felt it was the right thing
to say or he intended to begin the process.
- A Declaration of Intent, not final papers, was all that was required
- Not everyone who became a citizen registered to vote. Also, some states
allowed people who had filed a Declaration of Intent to vote even if
they had not received their final papers.
What Is the Procedure?
By now you might be getting the idea that naturalization documents are
not necessarily as easy to use as some other records, such as the census.
Generally, for most of our history there are two rules that apply to naturalizations:
- It was a legal process handled through the courts.
- It was usually a two-part procedure, the first being a Declaration
of Intent indicating that the person intended to become a citizen (voluntary
after 1952). This may have included, as part of the document or as a separate certificate or record, information on the individual's date and place of arrival in the United States. After a required period
of residency (five years, with some exceptions) the individual would
then file a Petition for Naturalization and, if granted, would receive
a Certificate of Naturalization. Both or either the Declaration and/or
the Petition may contain valuable genealogical information.
The procedures and requirements differed greatly depending on the location
and the time period. The first important information the researcher needs
to establish is whether the naturalization was before 1906 or afterwards.
In 1906 the naturalization process was simplified and taken over by the
federal government. It is much easier to find out where to look and what
to expect if it took place after 1906.
Where are Records Located?
Prior to 1906, naturalization could take place in any court having common
law jurisdiction. The court could be federal, state, or local and be called
by many names: circuit, supreme, civil, equity, district, common
pleas, chancery, superior. In some cases a municipal, police, criminal,
or probate court did not actually have the right to handle naturalization
but they issued certificates anyway. Prior to 1905, over 5,000 courts
had been handling naturalization. By 1908 that number was reduced to just
over 2,000 courts and the Department of Labor began issuing A Directory
of Courts Having Jurisdiction in Naturalization Proceedings. This
directory, available on microfilm through the Family History Library,
can help you determine which court your ancestor may have used. Naturalizations
can now be handled in either federal or local courts. Since 1929, most
naturalizations have been at federal courts, but earlier records are more
likely to be at a local court because it was closer to the individual.
Prior to 1906, the biggest problem confronting a researcher is where
to find the record. The two procedures did not have to take place in the
same court so the immigrant could have filed a Declaration soon after
his arrival in New York, or perhaps he lived in Ohio for a while and filed
his Declaration there, hoping to qualify for free land. Then, after settling
on land in South Dakota, he may have submitted his Petition to a local
court. The Family History Library has microfilm copies of many pre-1930
records. If your ancestor lived in an urban area, there are many rolls
of films relating to Chicago (1871-1930), New York (1792-1906), Philadelphia
(1793-1911) and New England (1791-1906).
The good news is that copies of all naturalization records from 1906
to 1956 are at the Immigration and Naturalization Service in Washington,
DC. This does not guarantee success though. Since you are dealing with
a government agency, be prepared for a long wait. I had a copy of one
certificate of citizenship which gave the court, location, date, and name
of the immigrant, but the INS was never able to locate the file. You may
also be able to obtain copies of the file from the court, but some courts
will refer you to INS in Washington.
Naturalizations after 1956 are kept at the local INS office. Some records
are being transferred to National Archives branches or state archives.
See their page "Naturalization
Records" for information about records at the National Archives.
What Do the Records Contain?
In 1906, the process was standardized and uniform forms were issued.
The forms have been revised periodically, but generally contain at least
the following information:
- Declaration of Intention: The court date and location;
the individual's name, age, occupation, personal description, birth
date and location, and residence; their date and vessel of arrival and
last foreign residence. From 1929 to 1941, it asked for the spouse's
name, marriage date and place, and birth information, plus names, dates,
and places of birth and residence of each child. It also includes a
picture of the applicant. After 1941, it requests the spouse's name
(no details on birth) and doesn't mention children. After 1929, the
last foreign residence is omitted. A separate Certificate of Arrival
giving details of arrival was required for arrivals after 1906, with
- Petition for Naturalization: The court date and location;
name, residence, occupation, birth date and place; immigration departure
date and place; U.S. arrival port, date, and ship; date and place of
Declaration of Intention; spouse's name, birth date and place; children's
names, dates and places of birth; residence, witnesses, and oath of
allegiance. From 1929 to 1941 it also asked race, marriage date and
place, date of spouse's entry into the U.S. and naturalization information,
last foreign residence, and name used on arrival. After 1941, a personal
description was added, as well as details of any trips longer than six
months out of the U.S.
- The actual certificate: This is the document given to the new citizen and the one a researcher is most likely to find in old family papers. It contains little information: court, date, and name
of new citizen. It may contain other information, but the Declaration
and Petition are the papers the researcher should try to locate.
Prior to 1906
There is no predicting what you might find in naturalization papers prior
to 1906. Until 1828, the immigrant had to report to a court to register.
This report was supposed to contain information on the birthplace, age,
and nationality. These alien registry books were separate volumes in many
areas, especially in the northeast. The registry may be found in later
records combined with the Declaration. After 1911, the immigrant was issued
a certificate of arrival.
A Declaration of Intention was usually required, again with exceptions.
It may contain little more than the name of the immigrant, but may also
have some of the details incorporated in the post-1906 form described
above. These are also called "first papers."
Early Petitions are part of the court record and may even be recorded
in separate ledgers called "second papers" or "final record." Information
varies greatly. Certificates of Naturalization were given to the new citizen.
The information was recorded but duplicates of the certificate were not
kept on file.
Spouses and children may derive their citizenship from their husband/father
and not have to go through the procedure themselves. Up until 1922, a
foreign born woman who married an American citizen became naturalized
upon marriage or, if her husband was foreign born, when he became a citizen.
No separate filings were required. Prior to 1906, they usually were not
even mentioned in the husband's petition.
After 1922, a woman had to be naturalized on her own. However, from 1907
to 1922, if a woman married an unnaturalized alien, she took his citizenship.
This created one particularly bizarre situation for a woman who was born
in Poland in September 1901. In November of that same year she came to
the U.S. with her parents. Her father obtained citizenship in 1906 and
she automatically became a citizen as well. In 1918 she married a man
who had immigrated from Russia in 1913, but was not yet a citizen. She
lost her citizenship because of this rule. In November 1922, her husband
became a citizen. This did not help her because on September 22, 1922,
the law was changed to say that any alien woman who married an American
does not become a U.S. citizen automatically. She applied on her own and
again became a U.S. citizen in 1942!
Children under the age of 21 automatically become citizens by the naturalization
of a parent. However, there are many exceptions to this law regarding
residence, whether or not a Declaration is required, what happens if the
parent dies or becomes insane, adopted children, illegitimate children,
step-children, and children born abroad.
Obtaining citizenship generally has been made easier for aliens who served
in the U.S. military. Filing of the Declaration of Intention was often
not required and the period of residency eliminated or reduced. However,
in 1894 the law was changed and during times of peace no one (except Indians)
could serve in the military unless he or she was a U.S. citizen or had
filed a Declaration of Intention. Aliens were allowed to serve during
times of war and to become naturalized.
Some states had laws forbidding aliens from owning land unless they had
filed a declaration. Homesteaders were able to qualify for free public
land after filing the declaration. The National Archives has homestead
records prior to May 1, 1908 and Bureau of Land Management after that
date. BLM can be accessed at http://www.blm.gov/nhp/index.htm.
Obstacles to Research
Besides identifying the court (or courts) that handled the various steps
in the procedure, there are other pitfalls. Some immigrants filed the
Declaration, perhaps for homesteading, but did not follow through with
the final papers. If they could vote and obtain land with the Declaration
only, they had no need to complete the process. Others were allowed to
skip the declaration and only had to file the final petition. In addition,
fraud occurred on a large scale. Thousands of fraudulent certificates
were issued in 1868 in New York because votes were needed in an election.
These certificates had no court records documenting the citizenship. If
you cannot locate the naturalization record in the court where it was
supposed to have occurred, your ancestor may have had a fraudulent certificate.
For further information, see the National Archives and Records Administration "Naturalization Records" page; the LDS Research Outline on the U.S. (pp. 38-41);
and the excellent 43-page booklet American Naturalization Processes
and Procedures 1790-1985 by John J. Newman (Indianapolis: Family History
Section, Indiana Historical Society 1985).
| 486
|
The Ancient Forests of North America are extremely diverse. They include the boreal forest belt stretching between Newfoundland and Alaska, the coastal temperate rainforest of Alaska and Western Canada, and the myriad of residual pockets of temperate forest surviving in more remote regions.
Together, these forests store huge amounts of carbon, helping to stabilise climate change. They also provide a refuge for large mammals such as the grizzly bear, puma and grey wolf, which once ranged widely across the continent.
In Canada it is estimated that ancient forest provides habitat for about two-thirds of the country's 140,000 species of plants, animals and microorganisms. Many of these species are yet to be studied by science.
The Ancient Forests of North America also provide livelihoods for thousands of indigenous people, such as the Eyak and Chugach people of Southcentral Alaska, and the Hupa and Yurok of Northern California.
Of Canada's one million indigenous people (First Nation, Inuit and Métis), almost 80 percent live in reserves and communities in boreal or temperate forests, where historically the forest provided their food and shelter, and shaped their way of life.
Through the Trees - The truth behind logging in Canada (PDF)
On the Greenpeace Canada website:
Interactive map of Canada's Boreal forest (Flash)
Fun animation that graphically illustrates the problem (Flash)
Defending America's Ancient Forests
| 110
|
Delahoyde & Hughes
A 6th-century BCE Greek philosopher and mathematician, originally from Samos (an island off the coast of Asia Minor settled by the Greeks), and born about 570 BCE, he left home around 530 BCE to escape the tyranny of the autocrat Polycrates. He lived in southern Italy, influencing city politics until the turn of the century when the citizens revolted against his influence and forced him to settle in Metapontum instead. Followers venerated him and they formed some sort of religious order. Although he did not set down his ideas in written form, Pythagorean centers sprang up throughout the Greek mainland during the 5th century BCE, including in Thebes and Athens, so he certainly influenced Socrates and therefore Plato. Legends include an instance of a superhuman voice wishing Pythagoras good morning as he was crossing the river Casas, and his being able to appear in both Croton and Metapontum on the same day at the same hour.
Pythagoras' concepts included the mathematical order of the cosmos, and he may have been led to this assessment from the mathematical order of music (consonants of octaves, fifths, and fourths being produced by simple ratios in the lengths of the vibrating strings). We know "the square of the hypotenuse of a right triangle is equal to the sum of the squares of the sides containing the right angle." But Pythagoras had an all-inclusive vision, which also included the "music of the spheres" (the sound that no doubt had to be produced by the planets encircling earth).
Pythagoras believed in reincarnation and claimed to remember previous incarnations. [Transmigration of souls is not a Greek leaning, so one school of thought says Pythagoras travelled east beyond Egypt and came back with the notion (but they say this of Jesus too).] A later report claims he told followers that he had once been Aethalides, a son of Hermes, who allowed him one wish excluding immortality. He wished to remember what happened to him, alive and dead. One of his remembered incarnations was as Euphorbus, who was wounded by Menelaus. Afterwards, he became Hermotimus, who in a temple of Apollo identified the shield of Menelaus (dedicated to Apollo when he sailed back from Troy). His next incarnation was as Pyrrhus the Delian fisherman, and then Pythagoras. There is a famous story that he once stopped an animal from being beaten because he insisted he recognized the voice of a dead friend. (I wonder if that might not have been merely a humane device to stop the beating of an animal.) (Asimov 535) Due partly to his belief in metempsychosis, he opposed the taking of life, the eating of flesh, and association with those who benefit by the slaughter business. He supposedly coined the term "Philosophy" first as a word to signify the love and pursuit of wisdom, which helps the soul bring itself into attunement with the cosmos.
- Do you agree with the philosophy of Pythagoras? Without transmigration of souls as a foundation, Metamorphoses still works because of its notions of the earth being generous, its commentary on pollution, and in general its environmentalism which seems almost modern in essence.
- But we don't sacrifice to the gods anymore. Uh huh, well, in celebration of the Pilgrims infiltrating North America we butcher countless turkeys; and there is a trend of celebrating the birth and rebirth of our Lord and Saviour with roast pigs; and there's this sanctimony about grilling ground flesh in honor of "the boys" who died in the big war; and we certainly sacrifice millions of animals now to the "god" Science. So what we consider "religion" may have changed more than the nature of the practices (or the excuses).
- The final connection to all the previous books? Metamorphoses celebrates the joy of change, of the natural world being animated, and in the ongoing process of wondrous creation. The stories are intended to inspire respect for the wonders of nature (however based in speciesistic assumptions that stories must be about humans). In this ongoing creation and active context, we have responsibilities too -- according to Pythagoras. The world, its flora, and its fauna: all have past lives and life stories worthy of some respect.
Ferguson, Kitty. The Music of Pythagoras. NY: Walker & Co., 2008.
| 72
|
Cheating, Plagiarism, and Academic Dishonesty
Students must be honest and responsible in the completion of their academic work. While parents are encouraged to assist and guide their children, they must allow their children to do their own work. Students must refrain from:
- Copying another student's work or homework
- Plagiarism (submitting another's work as one's own)
Teachers who suspect that a student may have been academically dishonest will report their concern to the Administration. Consequences may range from receiving a zero (no credit) on the assignment to required withdrawal from school.
| 86
|
Australian Bureau of Statistics
2901.0 - Census Dictionary, 2001
Previous ISSUE Released at 11:30 AM (CANBERRA TIME) 24/04/2001
This variable identifies holders of Australian citizenship.
Citizenship data are used to obtain information on the tendency of different migrant groups to take out citizenship and to measure the size of groups eligible to vote. The data are useful cross-classified with birthplace, year of arrival in Australia and age data.
| 696
|
Small businesses are starting smaller and staying smaller. That’s the conclusion of a study released by entrepreneurship advocate Kauffman Foundation.
“Starting Smaller; Staying Smaller: America’s Slow Leak in Job Creation” said its analysis of government data shows that since the middle of the last decade and perhaps longer, the growth path and survival rate of new businesses means they are generating fewer new jobs.
The businesses that started in 2009, for example, are on course to create 1 million fewer jobs in the next decade than historical averages would predict, the study says.
The trend started long before the Great Recession and is likely to continue into the future, the study says.
That trend signals an important shift for the U.S. economy, which needs to create a minimum of 125,000 new jobs a month merely to keep up with adult population growth.
The Kauffman research distinguishes between numbers of businesses and numbers of businesses that have employees.
Citing data from the U.S. Census Bureau, the study found that the number of new employer businesses has fallen 27% since 2006. The total number of startups (employer businesses and self-employed) has increased since the recession, according to the Kauffman Index of Entrepreneurial Activity. But “firms that support only the self-employed owner do not scale to generate the new jobs needed to support overall economic growth.”
The Bureau of Labor Statistics says startups created about 4.65 million new jobs annually from 1997 to 2000 compared to 2.5 million in 2010, the study notes. Census data show a drop from 7 million jobs in 2006 to 4.5 million in 2009.
Part of the decline is that about 27% fewer businesses with employees are being started, the study says.
A second significant factor is that new small-business survival has declined. Before the recession, 45% to 50% of new businesses survived five years, the study says. Now fewer than 45% of businesses started in 2004 were still in business in 2009.
And a third factor is that new businesses are hiring fewer employees than in the past. In the 1990s, new establishments opened with an average of 7.5 jobs. By 2010, the number of jobs at the average startup was 4.9 jobs, according to Bureau of Labor Statistics.
And those businesses that survive never accelerate their hiring to make up for their smaller initial staffs, the study says. So over their lifetimes, these businesses created fewer jobs.
The study does not address why businesses are making do with fewer workers. Are they using technology instead of human labor? Are they doing work that requires fewer employees? Do minimum wage laws and health insurance mandates affect staffing levels?
The study concludes:
“In many cases, companies or individuals that once would have been hired as employees of a business now are performing the work on a temporary basis as contractors through other professional service organizations or under their own self-employment contracts. … No matter how laudable their individual efforts, these sole proprietors… are not likely ever to be major employers.
“The clear challenge for the U.S. economy instead is to start more employer businesses, ensure that they are starting larger, and nurture their growth.”
The study doesn’t suggest how.
| 448
|
Balmoral Castle is in Aberdeenshire – and is usually known as The Queen’s private residence in Scotland – while the rest of the family enjoy the estate with their grandparents.
[Image: Balmoral Castle in 1854, painted by Queen Victoria]
Queen Victoria had, in 1848, found the house “small but pretty”, and recorded in her diary that: “All seemed to breathe freedom and peace, and to make one forget the world and its sad turmoils”.
Prince Albert, Queen Victoria’s Prince Consort, bought it in 1852 and in the following years had it rebuilt in white granite as a castellated mansion in Scottish baronial style.
The grounds were laid out by Prince Albert in the 1850s and planted out as parterre gardens with rare conifers, while plantations were established around the house.
[Image: South front of Balmoral Castle]
The couple took a great interest in their staff, setting up a lending library; they also encouraged new ideas ... and developed a model dairy, which was completed by the Queen after her husband’s death in 1861.
Near the Castle the Queen created her cottage garden around a private house, known as the Garden Cottage, while later in the 1920s Queen Mary, George V’s wife, altered it to include a sunken garden.
[Image: Edward VII relaxing at Balmoral, photographed by his wife, Alexandra]
In our present Queen’s reign (1952 - ....) the Duke of Edinburgh, her husband, has improved the gardens – redesigning the herbaceous borders, planting up a shrubbery around the river and adding a water garden near Victoria’s Garden Cottage.
[Image: North-West corner of Balmoral Castle]
Today it is a working estate – deer stalking, grouse shoots, forestry and farming are the main land uses, while there are tours to be booked, and cottages to rent.
The Castle, gardens and exhibitions (especially in this Diamond Jubilee Year) are open to the public from April to July.
Queen Victoria (1837 - 1901) in her journals described Balmoral as “my dear paradise in the Highlands” ...
That is B for Balmoral Castle ... a Scottish baronial castellated mansion built for Queen Victoria in the 1850s; now managed by the Balmoral Estates ... part of the ABC series Aspects of British Castles
By the way I meant to add yesterday - that Bob Scotney is also doing Castles - his A was for Amberley - should you wish to visit ...
Positive Letters Inspirational Stories
| 174
|
The length of amyloid fibrils found in diseases such as Alzheimer and Parkinson appears to play a role in the degree of their toxicity, according to researchers at the University of Leeds. Their findings are published in The Journal of Biological Chemistry in a paper titled “Fibril Fragmentation Enhances Amyloid Cytotoxicity.”
Sheena Radford, Ph.D., and colleagues systematically analyzed the effects of fragmentation on three of the 30 or so proteins that form amyloid in human diseases. Their results showed that in addition to the expected relationship between fragmentation and the ability to seed, the length of fibrils also correlated with their ability to disrupt membranes and reduce cell viability. This was evident even when there were no other changes in molecular architecture.
Co-author Eric Hewitt, Ph.D., says that while the findings provide scientists with unexpected new insights for the development of therapeutics against amyloid deposit-related diseases, the next stage of research will involve looking at a greater number of the proteins that form amyloid fibrils. “We anticipate that when we look at amyloid fibers formed from other proteins, they may well follow the same rules.”
“It may be that because they’re smaller it’s easier for them to infiltrate cells,” Dr. Hewitt suggests. “We’ve observed them killing cells, but we’re not sure yet exactly how they do it. Nor do we know whether these short fibers form naturally when amyloid fibers assemble or whether some molecular process makes them disassemble or fragment into shorter fibers. These are our next big challenges.”
| 425
|
- Yes, this is a good time to plant native grass seed in the ground. You may have to supplement with irrigation if the rains stop before the seeds have germinated and made good root growth.
- Which grasses should I plant? The wonderful thing about California is that we have so many different ecosystems; the challenging thing about California is that we have so many different ecosystems. It’s impossible for us to know definitively which particular bunchgrasses used to grow or may still grow at your particular site, but to make the best guesses possible, we recommend the following:
- Best-case scenario is to have bunchgrasses already on the site that you can augment through proper mowing or grazing techniques.
- Next best is to have a nearby site with native bunchgrasses and similar elevation, aspect, and soils, that you can use as a model.
- After that, go to sources such as our pamphlet Distribution of Native Grasses of California, by Alan Beetle, $7.50.
- Also reference local floras of your area, available through the California Native Plant Society.
Container growing: We grow seedlings in pots throughout the season, but ideal planning for growing your own plants in pots is to sow six months before you want to put them in the ground. Though restorationists frequently use plugs and liners (long narrow containers), and they may be required for large areas, we prefer growing them the horticultural way: first in flats, then transplanting into 4" pots, and when they are sturdy little plants, into the ground. Our thinking is that since they are not tap-rooted but fibrous-rooted (one of their main advantages as far as deep erosion control is concerned) square 4" pots suit them, and so far our experiences have borne this out.
In future newsletters, we will be reporting on the experiences and opinions of Marin ranchers Peggy Rathmann and John Wick, who are working with UC Berkeley researcher Wendy Silver on a study of carbon sequestration and bunchgrasses. So far, it’s very promising. But more on that later. For now, I’ll end with a quote from Peggy, who grows, eats, nurtures, lives, and sleeps bunchgrasses, for the health of their land and the benefit of their cows.
“It takes a while. But it’s so worth it.”
| 350
|
Antonio de Herrera y Tordesillas
A Spanish historian; born at Cuellar, in the province of Segovia, in 1559; died at Madrid, 27 March, 1625. He was a great-grandson of the Tordesillas who was put to death by the Comuneros at Seville. He studied in Spain and Italy, and became secretary to Vespasiano Gonzaga, a brother of the Duke of Mantua, who was afterwards Viceroy of Navarre and Valencia, and who recommended him to Philip II in the last year of that monarch's reign. Philip appointed him grand historiographer ( cronista mayor ) of America and Castile, and he filled that office during part of his royal patron's reign, the whole reign of Philip III, and the beginning of that of Philip IV. At his death his body was conveyed to Cuellar, and interred in the church of Santa Marina, where his tomb is still to be seen.
His most famous work is the "Historia General de los Hechos de los Castellanos en las Islas y Tierra Firme del Mar Océano" (General History of the deeds of the Castilians on the Islands and Mainland of the Ocean Sea), divided into eight periods of ten years each, and comprising all the years from 1472 to 1554. This work was printed at Madrid in 1601; reprinted by Juan de la Cuesta in 1615; revised and augmented by Andrés González and published at Madrid by Nicolas Rodríguez in 1726, and at Antwerp, by Juan Bautista Verdussen, in 1728. Worthy of note is the "Description of the West Indies", in the first volume of his work, which was translated into Latin and published at Amsterdam, by Gaspar Barleo, in 1622, a French version being published at Paris in the same year. In 1660 there appeared a French translation of the first three decades of his "Historia" by Nicolás de la Corte. In writing his great work Tordesillas made use of all the public archives, having access to documents of every kind. It is evident in his writings that he had to deal with a large number of historical manuscripts, and contented himself with relating events as he found them recorded. A great part of his work is more or less a transcript of the History of the Indies left by the famous Bishop Bartolomé de las Casas , though expurgated of wellnigh everything unfavourable to the settlers. A painstaking and conscientious investigator for the most part, his style does not correspond to his other admirable qualifications. He was a learned and judicious man, though, particularly in the later decades, somewhat prone to overpraise the conquerors and their exploits.
In addition to that already mentioned, his most important works are: "A General History of the World during the time of Philip II from the year 1559 to the King's death"; "Events in Scotland and England during the forty-four years of the lifetime of Mary Stuart, Queen of Scotland" (Historia de lo sucedido en Escocia é Inglaterra en los cuarenta y cuatro años que vivió Maria Estuardo Reina de Escocia); Five books of the history of Portugal and the conquests of the Azores in the years 1582, 1583; "History of events in France from 1585 to 1594" (a work published in Madrid in 1598, but suppressed by command of the king); "A Treatise, Relation, and Historical Discourse on the Disturbances in Aragon in the years 1591 and 1592" (Tratado, relación y discurso histórico de los movimientos en Aragon en los años de 1591 y 1592); "Commentary on the deeds of the Spaniards, French, and Venetians in Italy, and of other Republics, Potentates, famous Italian Princes and Captains, from 1281 to 1559"; "Chronicle of the Turks, following chiefly that written by Juan Maria Vicentino, chronicler to Mahomet, Bajazet, and Suleiman, their lords" (unpublished); various works translated from the French and Italian, preserved in the National Library at Madrid.
| 432
|
In the late 1980s, a study by NASA and the Associated Landscape Contractors of America (ALCA) resulted in excellent news for homeowners and office workers everywhere. The study concluded that common houseplants such as bamboo palms and spider plants not only make indoor spaces more attractive, they also help to purify the air!
The study was conducted by Dr. B.C. Wolverton, Anne Johnson, and Keith Bounds in 1989. While it was originally intended to find ways to purify the air for extended stays in orbiting space stations, the study proved to have implications on Earth as well.
Newer homes and buildings, designed for energy efficiency, are often tightly sealed to avoid energy loss from heating and air conditioning systems. Moreover, synthetic building materials used in modern construction have been found to produce potential pollutants that remain trapped in these unventilated buildings.
The trapped pollutants result in what is often called the Sick Building Syndrome. With our ultra modern homes and offices that are virtually sealed off from the outside environment, this study is just as important now as when it was first published.
While it's a well known fact that plants convert carbon dioxide into oxygen through photosynthesis, the NASA/ALCA study showed that many houseplants also remove harmful elements such as trichloroethylene, benzene, and formaldehyde from the air.
NASA and ALCA spent two years testing 19 different common houseplants for their ability to remove these common pollutants from the air. Of the 19 plants they studied, 17 are considered true houseplants, and two, gerbera daisies and chrysanthemums, are more commonly used indoors as seasonal decorations.
The advantage that houseplants have over other plants is that they are adapted to tropical areas where they grow beneath dense tropical canopies and must survive in areas of low light. These plants are thus ultra-efficient at capturing light, which also means that they must be very efficient in processing the gasses necessary for photosynthesis. Because of this fact, they have greater potential to absorb other gases, including potentially harmful ones.
In the study NASA and ALCA tested primarily for three chemicals: Formaldehyde, Benzene, and Trichloroethylene. Formaldehyde is used in many building materials including particle board and foam insulations. Additionally, many cleaning products contain this chemical. Benzene is a common solvent found in oils and paints. Trichloroethylene is used in paints, adhesives, inks, and varnishes.
While NASA found that some of the plants were better than others for absorbing these common pollutants, all of the plants had properties that were useful in improving overall indoor air quality.
NASA also noted that some plants are better than others in treating certain chemicals.
For example, English ivy, gerbera daisies, pot mums, peace lily, bamboo palm, and Mother-in-law's Tongue were found to be the best plants for treating air contaminated with Benzene. The peace lily, gerbera daisy, and bamboo palm were very effective in treating Trichloroethylene.
Additionally, NASA found that the bamboo palm, Mother-in-law's tongue, dracaena warneckei, peace lily, dracaena marginata, golden pathos, and green spider plant worked well for filtering Formaldehyde.
After conducting the study, NASA and ALCA came up with a list of the most effective plants for treating indoor air pollution.
The recommended plants can be found below. Note that all the plants in the list are easily available from your local nursery.
For an average home of under 2,000 square feet, the study recommends using at least fifteen samples of a good variety of these common houseplants to help improve air quality. They also recommend that the plants be grown in six inch containers or larger.
Here is a list of resources for more information on this important study:
PDF files of the NASA studies related to plants and air quality:
| 867
|
const void * memchr ( const void * ptr, int value, size_t num );
void * memchr ( void * ptr, int value, size_t num );
Locate character in block of memory
Searches within the first num bytes of the block of memory pointed by ptr for the first occurrence of value (interpreted as an unsigned char), and returns a pointer to it.
Both value and each of the bytes checked in the ptr array are interpreted as unsigned char for the comparison.
- ptr: Pointer to the block of memory where the search is performed.
- value: Value to be located. The value is passed as an int, but the function performs a byte per byte search using the unsigned char conversion of this value.
- num: Number of bytes to be analyzed. size_t is an unsigned integral type.
A pointer to the first occurrence of value in the block of memory pointed by ptr.
If the value is not found, the function returns a null pointer.
In C, this function is only declared as:
void * memchr ( const void *, int, size_t );
instead of the two overloaded versions provided in C++.
/* memchr example */
#include <stdio.h>
#include <string.h>

int main ()
{
  char * pch;
  char str[] = "Example string";
  pch = (char*) memchr (str, 'p', strlen(str));
  if (pch != NULL)
    printf ("'p' found at position %d.\n", (int)(pch-str)+1);
  else
    printf ("'p' not found.\n");
  return 0;
}
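Assuming the corrected example above, the program should print: 'p' found at position 5. ('p' is the fifth character of "Example string").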
- memcmp: Compare two blocks of memory (function)
- strchr: Locate first occurrence of character in string (function)
- strrchr: Locate last occurrence of character in string (function)
| 590
|
NICAEA, ARABIC CANONS OF, name applied to several series of canons that are missing in the Greek or Latin canonical collections. They appear to have been reworked from the Syriac, at least in part. In the latter language the texts attributed to the Council of NICAEA in 325 are said to have come from the pen of the bishop Maruta of Maipherkat (in Arabic Mayyafaraqin, today a town in Turkey). At all events, who the translator, or rather the adapter, was is not known, nor at what date the canons were adopted by the Copts. It will be noted—this is not a proof that they were previously unknown—that in his Nomocanon the twelfth-century patriarch GABRIEL II IBN TURAYK knew only the twenty canons counted in the Greek collections, while MIKHA’IL, bishop of Damietta, cited two series of canons of Nicaea, one of twenty canons and one of eighty-four. Given that the grouping of these texts diverges greatly in the manuscripts, it has seemed better to follow the exposition given by Abu al-Barakat IBN KABAR in his religious encyclopedia Misbah al-Zulmah. This passage was translated into French in J. M. Vansleb's Histoire de l'église d'Alexandrie (1677, pp. 265ff.).
Ibn Kabar divides the documents attributed to the Council of Nicaea into three books. In the first book (according to him, it is the second in the Greek collections) he groups a history of CONSTANTINE I and his mother, Helena, as well as a presentation of his incentives for the convocation of the council, which forms a kind of introduction. The collection of Macarius, a monk of Dayr Abu Maqar in the fourteenth century, adds at this point a list of heresies and sects and a list of the 318 bishops who participated. Then comes the series of twenty authentic canons, according to the Melchite recension, followed by the Coptic series of thirty (sometimes thirty-three) canons concerning anchorites, monks, and the clergy. W. Riedel (1968, pp. 38, 1791) asked if this was not a reworking of the Syntagma ad monachos attributed to Saint Athanasius.
As to the second book, Ibn Kabar tells us, "The Melchites and the Nestorians have translated [the second book] and the Jacobites have adopted it." It is a series of eighty-four (sometimes eighty) canons. This division would perhaps indicate that the original text was continuous.
The third book contains the "Books of the Kings," which are themselves divided into four books and also exist independently. This is a collection of the legislation enacted by the Byzantine emperors Constantine, Theodosius, and Leo. Here these canons are attributed to the Council of Nicaea. It appears that the Christians of the Orient adopted these texts in defiance of the Muslims, who referred to the Shari‘ah, or Muslim sacred law, for guidance in purely civil matters such as marriages, inheritances, and the like.
These texts exist in numerous translations. The first book gives a history of the emperor Constantine and his mother and relates the story of the council, as well as the reasons for the convocation of the bishops. It includes the twenty authentic canons followed by the thirty canons called Arabic and gives the history, or prehistory, of the Council of Nicaea in a rather free Latin translation by Abraham Ecchellensis (Ibrahim al-Haqilani), a celebrated Maronite deacon. The "Thirty Canons Relative to the Monks and Clergy" are given in Latin by the same author in a paraphrase rather than a true translation. The list of heresies is given in German translation by A. Harnack (1899, pp. 14-71). The list of the bishops according to the Coptic texts is examined by, among others, F. Haase (1920, pp. 81-92). As for the eighty-four canons, they will be found in a paraphrase by Abraham Ecchellensis in J. D. Mansi (cols. 1029-1049).
The enormous mass of the documents relating, rightly or wrongly, to the first council, which played a considerable role in the East more than anywhere else, is organized in the collection of Macarius into four books. The difference between his division and that of Ibn Kabar is that Macarius' second book comprises not all the eighty-four canons but only the first thirty-two. Canons forty-eight to seventy-three, combined with the thirty concerning anchorites, monks, and clergy, form the third book, the fourth containing only the Coptic recension of the twenty official canons. The "Four Books of the Kings" have with him a place apart.
The Arabic Canons of Nicaea are, in the strict sense, the eighty-four canons adapted from the Syriac by the Melchites and borrowed by the Copts. In addition to this series of eighty-four canons in Arabic literature, the literature in the Coptic language contains a series that has not survived in Arabic translation, called Gnômès. It is credited to the Council of Nicaea and gives moral exhortations, which probably reflect the discipline in force in the fourth century in the church of Alexandria. It was published and translated into French by E. Revillout (1873, pp. 210-88; and 1875, pp. 5-77, 209-266).
| 490
|
scintillation counter
scintillation counter, radiation detector that is triggered by a flash of light (or scintillation) produced when ionizing radiation traverses certain solid or liquid substances (phosphors), among which are thallium-activated sodium iodide, zinc sulfide, and organic compounds such as anthracene incorporated into solid plastics or liquid solvents. The light flashes are converted into electric pulses by a photoelectric alloy of cesium and antimony, amplified about a million times by a photomultiplier tube, and finally counted. Sensitive to X rays, gamma rays, and charged particles, scintillation counters permit high-speed counting of particles and measurement of the energy of incident radiation.
| 67
|
Earthquake off the Coast of Venezuela
A strong earthquake struck off the Venezuelan coast on September 12, 2009. The magnitude 6.3 earthquake occurred underneath the Caribbean Sea near the city of Puerto Cabello. According to reports from Reuters some buildings in the Venezuelan countryside were damaged, but there were few injuries.
The United States Geological Survey indicated that the earthquake epicenter was in the region of the San Sebastián and El Pilar faults, both of which are seismically active. The fault zone is near the boundary of the South American and Caribbean tectonic plates. The Caribbean plate is moving 20 millimeters (0.8 inches) per year with respect to the South American Plate. Previous earthquakes in the region occurred in 1989 and 1967.
This image originally appeared on the Earth Observatory.
| 215
|
In our Torah portion this week, it is written that Jacob "came to a certain place and stayed there that night" (Gen. 28:11). The Hebrew text, however, indicates that Jacob did not just happen upon a random place, but rather that "he came to the place" -- vayifga bamakom (וַיִּפְגַּע בַּמָּקוֹם). The sages therefore wondered why the Torah states bamakom, "the place," rather than b'makom, "a place"? Moreover, the verb translated "he came" is yifga (from paga': פָּגַע), which means to encounter or to meet, suggesting that Jacob's stop was a divine appointment.
The Hebrew word makom ("place") comes from the verb kum (קוּם), meaning "to arise," and in Jewish tradition, ha-makom became a Name for God. The early sages therefore interpreted the verse to mean that Jacob actually had his dream while in Jerusalem rather than in Bethel... Indeed, the Talmud identifies "the place" Jacob encountered as Mount Moriah - the location of the Akedah - based on the language used in Genesis 22:4: "On the third day, Abraham raised his eyes and saw the place (הַמָּקוֹם) in the distance" (Sanhedrin 95b, Chulin 91b). If that is the case (i.e., if Jacob had been miraculously transported south from the mountains of Bet El to what would later be called Jerusalem), then Jacob's dream of the ladder would have functioned as a revelation of the coming glory of the resurrected Messiah - the Promised Seed whom Isaac foreshadowed and through whom all the families of the earth would be blessed. It was Yeshua, the Angel of the LORD, who came to "descend" (as the Son of Man) and to "rise" (as the resurrected LORD) to be our mediator before God (see John 1:47-51). Perhaps the Talmud makes the claim that Jacob's vision occurred in Jerusalem because Bethel later became the site for one of two idolatrous shrines (i.e., the golden calves at Bethel and Dan) established by King Jeroboam of the Northern Kingdom which he set up to discourage worship at Solomon's Temple in the City of Jerusalem (see 1 Kings 12:28-29).
At any rate, the Hebrew word for "intercessor" (i.e., mafgia: מַפְגִּיעַ) comes from the same verb (paga') mentioned in our verse. Yeshua is our Intercessor who makes "contact" with God on our behalf. Through His sacrifice for our redemption upon the cross (i.e., his greater Akedah), Yeshua created a meeting place (paga') between God and man. Therefore we see the later use of paga' in Isaiah 53:6, "...the Lord laid on him (i.e., hifgia bo: הִפְגִּיעַ בּוֹ) the iniquity of us all," indicating that our sins "fell" on Yeshua as He made intercession (i.e., yafgia: יַפְגִּיעַ) for us (Isa. 53:12). Because of Yeshua, God touches us and we are able to touch God... And today, our resurrected LORD "ever lives to make intercession (paga') for us" (Heb. 7:25). He is still touched by our need and sinful condition (Heb. 4:15).
כֻּלָּנוּ כַּצּאן תָּעִינוּ אִישׁ לְדַרְכּוֹ פָּנִינוּ
וַיהוָה הִפְגִּיעַ בּוֹ אֵת עֲוֹן כֻּלָּנוּ
kul·la·nu katz·on ta·i·nu, ish le·dar·ko pa·ni·nu
vadonai hif·gi·a bo, et a·von kul·la·nu
"All we like sheep have gone astray; we have turned each to his own way;
but the LORD has laid on him the iniquity of us all."
Paga' is also a term for warfare or violent meetings, and this alludes to the collision between the powers of hell and the powers of heaven in the outworking of God's plan of redemption: "... he (i.e., the Savior/Messiah) will crush your head (ראשׁ), and you (i.e., the serpent/Satan) will crush his heel (עָקֵב)." This was the original prophecy of redemption, an encounter with evil that would provide atonement and retribution (see the "Gospel in the Garden"). Rabbi Yechezkel Levenstein, the mashgiach of Ponevezh, points out that the entire future of the Jewish people hinged on the vision given to Jacob - and on Jacob's response to it. Had he been prevented from returning (i.e., through Laban's schemes to keep him in Charan), the Jewish people would have become enslaved and assimilated into the people of Aram, and ultimately the Messiah Himself would not have been born. Laban, then, embodied the desire of Satan to thwart the coming of the Promised Seed, and therefore he may be compared to Pharaoh, who likewise tried to enslave Israel in Egypt...
As I mentioned in my additional commentary on parashat Balak, Laban's worship of the serpent (nachash) led him to become one of the first enemies of the Jewish people (see "The Curses of Laban"). He tried to make Jacob a slave from the beginning, later claiming that all his descendants and possessions belonged to him (Gen. 31:43). After Jacob escaped from his clutches, Laban had a son named Beor (בְּעוֹר) who became the father of the wicked prophet Balaam (בִּלְעָם). In other words, the "cursing prophet" Balaam was none other than the grandson of diabolical Laban. Here is a diagram to help you see the relationships:
In Jewish tradition, Laban (the patriarch of Balaam) is regarded as even more wicked than the Pharaoh who enslaved the Jews in Egypt. This enmity is enshrined during the Passover Seder when we recall Laban's treachery as the one who "sought to destroy our father, Jacob." Spiritually understood, Laban's hatred of Jacob (i.e., Israel) was intended to eradicate the Jewish nation at the very beginning. Had Laban succeeded, Israel would have been assimilated and disappeared from history, and more radically, God's plan for the redemption of humanity through the Promised Seed would have been overturned....
Thankfully, Jacob was enabled by God's grace to overcome Laban and to return to the Promised Land, and even more thankfully, the Messiah was able to crush the rule of Satan through His atoning sacrifice and resurrection at Moriah. Yeshua, our ascended LORD, is ha-makom - the place where we encounter the Living God....
The authority and reign of Satan has been gloriously vanquished by Yeshua our Savior, blessed be He, though there is coming a time of judgment for all who dwell upon the earth. The time immediately preceding the appearance of the Messiah will be a time of testing in which the world will undergo various forms of tribulation, called chevlei Mashiach (חֶבְלֵי הַמָּשִׁיחַ) - the "birth pangs of the Messiah" (Sanhedrin 98a; Ketubot, Bereshit Rabbah 42:4, Matt. 24:8). Some say the birth pangs are to last for 70 years, with the last 7 years being the most intense period of tribulation -- called the "Time of Jacob's Trouble" / עֵת־צָרָה הִיא לְיַעֲקב (Jer. 30:7). The climax of the "Great Tribulation" (צָרָה גְדוֹלָה) is called the great "Day of the LORD" (יוֹם־יהוה הַגָּדוֹל) which represents God's wrath poured out upon a rebellious world system. On this fateful day, the LORD will terribly shake the entire earth (Isa. 2:19) and worldwide catastrophes will occur. "For the great day of their wrath has come, and who can stand?" (Rev. 6:17). The prophet Malachi likewise says: "'Surely the day is coming; it will burn like a furnace. All the arrogant and every evildoer will be stubble, and that day that is coming will set them on fire,' says the LORD Almighty. 'Not a root or a branch will be left to them'" (Mal. 4:1). Only after the nations of the world have been judged will the Messianic kingdom (מַלְכוּת הָאֱלהִים) be established upon the earth. Yeshua will return to Jerusalem to establish His glorious kingdom (as foretold by the prophets) and then "all Israel will be saved." The Jewish people will finally understand that Mashiach ben Yosef (the Suffering Servant) and Mashiach ben David (the anointed King of Israel) are one and the same... The 1,000 year reign of King Messiah will then commence (Rev. 20:4).
Presently our responsibility is to come to "the place" (ha-makom) where God's work of redemption was completed - that is, to the Cross of Yeshua. There we turn to God in repentance (teshuvah) and consign our sins to the judgment borne for us through Yeshua's sacrifice as our kapporah (atonement). By faith we understand that the resurrected Savior is forever ha-makom, "the place" where God meets with us, and we learn to abide in His gracious Presence by means of the Holy Spirit. We cease striving to justify ourselves (i.e., by virtue of works), but instead receive God's love and Spirit into our hearts. This means that we will study the Scriptures (truth), obey the Torah of Yeshua and His emissaries, and share the good message of God's redemption with a lost and dying world...
We are fast approaching, however, the prophesied "End of Days" (acharit hayamim), when the LORD will return to earth to "settle accounts" with its inhabitants (including those who profess to obey Him). We do not have much more time, chaverim. We must encourage people to call upon the LORD for salvation before it is too late...
כִּי־כֵן אהֵב אֱלהִים אֶת־הָעוֹלָם
עַד־אֲשֶׁר נָתַן בַּעֲדוֹ אֶת־בְּנוֹ אֶת־יְחִידוֹ
וְכָל־הַמַּאֲמִין בּוֹ לא־יאבַד
כִּי בוֹ יִמְצָא חַיֵּי עוֹלָם׃
ki-khen o·hev E·lo·him et-ha·o·lam,
ad-a·sher na·tan ba·a·do et-be·no et-ye·chi·do,
ve·khol-ha·ma·a·min bo, lo-yo·vad
ki vo yim·tza cha·yei o·lam
"For God so loved the world that he gave his only and unique Son,
so that whoever trusts in Him should not be destroyed, but have eternal life"
| 796
|
Vienna Philharmonic asks historians to look into alleged Nazi past
Historian says orchestra demonstrated sympathy for Austria's Nazi leadership during Holocaust.
The Vienna Philharmonic has asked three historians to research the orchestra's alleged Nazi past.
The announcement on January 22 came after Harald Walser, a historian and Parliament member for the Austrian Greens, said in an interview that the orchestra demonstrated sympathy for the country’s Nazi leadership during World War Two.
Historians Fritz Truempi, Oliver Rathkolb and Bernadette Mayrhofer will look into the "politicization" of the Vienna Philharmonic from 1938 to 1945, the fate of its Jewish musicians during that time and its relations with Nazis afterward, according to an orchestra statement, the French news agency AFP reported.
Their report is due in March.
Walser has called for forming a committee of inquiry into the role of the philharmonic during those years and said the orchestra has not released all its documents from the Nazi era or has destroyed some of them.
He cited a listing on the philharmonic’s official website that describes a concert delivered on New Year’s Day of 1939 as a “sublime homage to Austria,” when it actually was a celebration of the country's unification with Nazi Germany in 1938.
The New Year's Concert of the Vienna Philharmonic takes place each year on the morning of January 1 in Vienna and is broadcast to an estimated audience of 50 million in 73 countries.
Walser claims that in 1966, after the war, an emissary of the Vienna Philharmonic gave a new copy of its honor ring to Nazi war criminal Baldur von Schirach following his release from Berlin's Spandau Prison for war criminals. Von Schirach, who had received the original ring in 1942, was responsible for the deportation of tens of thousands of Austrian Jews to death camps.
Six Jewish musicians from the philharmonic were murdered by the Nazis in Austria and 11 were deported to death camps, according to reports.
| 591
|
Dr. David Dubin
After Einstein's death, his brain was preserved for future study. Scientists were naturally curious to see how the brain of this genius compared with the brain of a person of ordinary intelligence. Would there be an abundance of neurons (grey matter) or some unusual wiring of the neurons that distinguished his brain? When the brain was dissected, however, the only difference was that the number of cells that were not neurons (white matter) was dramatically increased. It is also true that, from an evolutionary point of view, as brains became larger and "smarter", what increased was not the percentage of neurons but of white matter. What does this mean?
When most of us, scientists and lay people alike, imagine the brain, we think of neurons, those cells carrying information in the form of electrical impulses. Neurons are the 'brains of the brain', so to speak, and the rest of the cells were thought to be there only for support. But neurons account for only 15 percent of the brain, while these so-called 'support' cells occupy 85 percent.
The group name for white matter cells, glia (derived from the word 'glue'), reflects their lowly status. First seen clearly by anatomists in the late 1800s, glia were initially thought to be little more than structural support for neurons, because, like scaffolding, glial cells literally hold neurons in place.
It was later found that glia can speed transmission of electrical signals and also deliver energy to neurons and remove neuronal waste products. While appearing to be a little more interesting than originally thought, glia still seemed about as sexy as wire insulation, food delivery and waste management devices.
Unlike neurons, there is no electrical activity within glia to send messages and information. It was therefore assumed that glia were deaf and dumb, incapable of communicating with either neurons or other glia, and therefore not particularly compelling as a focus of research. A good analogue would be the under-appreciated dark matter in astronomy. Dark matter is undetectable because it emits no electromagnetic radiation as the matter in the “visible” universe does. The existence of dark matter was eventually inferred from its gravitational effects on visible matter. While we had believed that the visible universe is the universe, the ordinary matter of our visible universe accounts for less than five percent of the total; dark matter accounts for more than 20 percent.
Today, the pace of knowledge about glia has begun to accelerate, as outlined in an exciting new book, The Other Brain, by Dr. R. Douglas Fields[i] (the title refers to the 85 percent of the brain that is glial). Fields is a neuroscientist specializing in glial cell research, and the information in his book is so new that it isn’t found in standard medical textbooks. Two review articles in the May, 2010 issue of the research journal Nature Neuroscience attest to how much still needs to be learned, and how potentially revolutionary are the implications.
So, are glial cells really dumb as a doorknob? First, we are just now learning that glial cells do communicate, not through synapses but through “gap junctions”. These gap junctions are protein channels connecting one cell to another, like a spaceship docking at the mother station. Glia can pass messages among themselves by using calcium as a chemical messenger instead of sending an electric signal as neurons do. In his research, Fields showed that after a 15-second delay, changes in response to a neuronal firing were seen in the surrounding glial cells. As Fields puts it, glial cells are “listening in” on what neurons are doing, something virtually no one in neuroscience thought possible.
Contrary to all established dogma, it is now known that glia not only communicate directly from one cell to an adjacent one, but also with cells very far away. Glia are even able to "jump" over barriers like a ping pong ball going over a net. And whereas neurons transmit their signals along straight lines, like telephone wires, glia communicate, as Fields puts it, by "broadcasting signals widely, like cell phones."
Glia are also critical to the growth of neurons. Neuron cells grown in the lab without accompanying glial cells were found to have many fewer synapses than neurons grown with glial cells. Glia seem to play a central role in the number of synapses a neuron develops.
Contrary to what scientists thought, glia also have neurotransmitters; in fact, the same ones that neurons do. And there are receptors for these neurotransmitters both inside and on the outer surface of glial cells. Glial cells have receptors for glutamate, the principal stimulating neurotransmitter in the cortex, and GABA, which acts as a “brake” to calm down neurons. In other words, glia can excite or depress neurons and stimulate or calm the brain, just like medications.
And, unlike neurons, glia can move. They have enormous cellular “fingers” like the elastic Mr. Fantastic of comic book fame, and can move between and on neurons. This constantly changes the circuitry of the brain. These glial fingers also form around synapses. They secrete substances that remodel tissue or stimulate neuron growth during development and repair of the brain making it likely that they function in a similar role during learning in the healthy brain.
Glia repair injury, defend against disease, nurse neurons back to health and act as guide dogs for the re-growth of injured nerve fibers. Glia detect and react to bacteria and viruses, “gobble up” pathogens and release toxic chemicals to kill bacteria. And new research suggests that immature glial cells can act like stem cells and mature glia can stimulate stem cells dormant in the adult brain to form replacement neurons and glia. This could have implications for repair of the nervous system, including new possibilities for treating spinal cord injuries.
This is about as far removed from mere insulation, food delivery and waste management services as can be imagined. Glia are a lot smarter than we thought they were. A 2005 study shows a correlation between organization of fibers made of glial cells and IQ. Finding a greater proportion of glial cells in Einstein’s brain is not so surprising after all.
We still know very little about glia—even the basics such as how many kinds of glial cells there are and what they look like in detail. Their discovery, however, broadens our appreciation of the complexity of the brain. The brain, with its 100 billion neurons and an average of 10,000 synapses per neuron, has more potential connections than there are atoms in our galaxy!
We don't know yet if diet, exercise, supplements and other factors affect glial cells. However, the implications for health and illness (seizures, infections, cancer, addictions, mental illness, and diseases such as Parkinson's and multiple sclerosis) may be far-reaching and profound.
As Fields says near the end of his book, “Here are cells that can build the brain of a fetus, direct the connection of its growing axons to wire up the nervous system, repair it after it is injured, sense impulses crackling through axons and hear synapses speaking, control the signals neurons use to communicate with one another at synapses, provide the energy source and substrates for neurotransmitters to neurons, couple large areas of synapses and neurons into functional groups, integrate and propagate the information they receive from neurons through their own private network, release neurotoxic or neuroprotective factors, plug and unplug synapses, move themselves in and out of the synaptic cleft, give birth to new neurons, communicate with the vascular and immune systems, insulate the neuronal lines of communication, and control the speed of impulse traffic through them. And some people ask, ‘Could these cells have anything to do with higher brain function?’ How could they possibly not?”
[i] FIELDS, R. Douglas, The Other Brain, Simon & Schuster, December 2009, 384 p.
People with obsessive-compulsive disorder, or OCD, have recurrent thoughts and behaviors that can be crippling. What follows is a discussion of the biology of the disorder and several aspects of treatment.
Obsessive compulsive disorder is not a single disorder; rather, it's a cluster of conditions. In OCD, sufferers might obsess and be anxious and compulsive about hoarding, cleaning, ordering and checking. Patients can also exhibit body dysmorphic disorder (BDD), where they imagine possessing a defect in physical appearance. Other diseases that overlap with OCD include Tourette's syndrome and hypochondria. OCD also has a genetic component and runs in families; relatives of someone with OCD are 8 times more likely to present symptoms.
The areas of the brain that appear to be involved in OCD are the orbito-frontal cortex (OFC), a center for decision-making, and the thalamus, which filters and relays information. In these brain regions, the neurotransmitter glutamate is responsible for neuronal signaling. It's thought that a deficit in glutamate production and function might contribute to the condition of OCD and other counter-productive behavior, including making decisions based on inappropriately perceived danger.
Obsessive Compulsive Disorder Treatment
The neurotransmitter serotonin may play an important role in whether someone gets obsessive compulsive disorder. Researchers have found a defect in the gene that makes a protein that "mops up" serotonin from between neurons. When there's too much of this protein there is not enough serotonin, and that's what is found in some people with OCD. This is why Serotonin Re-uptake Inhibitors (SRIs) such as Prozac, which make serotonin more available to the brain, are perhaps the most popular OCD treatment.
Another commonly used OCD treatment is exposure and response prevention (ERP), in which the patient is exposed to stimuli that trigger the repetitive behavior but is not allowed to actually perform the compulsive behavior. Eventually the patient can learn that nothing bad happens when they don't act out their compulsion.
Unfortunately, ERP is a stressful treatment for patients to endure. And significant numbers of patients drop out of treatment. Various drugs, such as the SRIs, are now being used in conjunction with ERP.
Anxiety is usually a significant part of obsessive compulsive disorder. While anxiety does not appear to be the actual cause of OCD, anxiety can drive persistent thoughts and behaviors. A reduction in anxiety can be important in the treatment of OCD. Various modalities for treating anxiety include medication, neurofeedback (both traditional and LENS Neurofeedback), and/or behavioral approaches.
When anxiety is successfully brought under control, there are not only fewer obsessive thoughts, but those obsessive thoughts that do persist become less prominent. Instead of being the dominant focus, compulsive thoughts become background music rather than a loud concert. These thoughts demand less attention, and this makes it easier to control compulsive behavior.
| 708
|
Creating and Administering Oracle Solaris 11 Boot Environments
A boot environment is a bootable instance of the Oracle Solaris operating system image plus any other application software packages installed into that image. System administrators can maintain multiple boot environments on their systems, and each boot environment can have different software versions installed.
Upon the initial installation of the Oracle Solaris release onto a system, a boot environment is created. You can use the beadm(1M) utility to create and administer additional boot environments on your system.
Note - In addition, the Package Manager GUI provides some options for managing boot environments.
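As a rough illustration only (the boot environment name "solaris-backup" below is a hypothetical example, not one created by the installer), a typical beadm workflow might look like the following; see the beadm(1M) man page for the authoritative options:

# List the boot environments on the system and show which one is active.
beadm list

# Create a new boot environment cloned from the currently active one.
beadm create solaris-backup

# Mark the new boot environment as the one to boot on the next reboot.
beadm activate solaris-backup

# Once it is no longer needed, an inactive boot environment can be removed.
beadm destroy solaris-backup

Because a new boot environment begins as a ZFS clone of the active one, creating it is fast and initially consumes very little additional disk space.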
Note the following distinctions relevant to boot environment administration:
A boot environment is a bootable Oracle Solaris environment consisting of a root dataset and, optionally, other datasets mounted underneath it. Exactly one boot environment can be active at a time.
A dataset is a generic name for ZFS entities such as clones, file systems, or snapshots. In the context of boot environment administration, the dataset more specifically refers to the file system specifications for a particular boot environment or snapshot.
A snapshot is a read-only image of a dataset or boot environment at a given point in time. A snapshot is not bootable.
A clone of a boot environment is created by copying another boot environment. A clone is bootable.
Shared datasets are user-defined directories, such as /export, that have the same mount point in both the active and inactive boot environments. Shared datasets are located outside the root dataset area of each boot environment.
Note - A clone of the boot environment includes everything hierarchically under the main root dataset of the original boot environment. Shared datasets are not under the root dataset and are not cloned. Instead, the boot environment accesses the original, shared dataset.
A boot environment's critical datasets are included within the root dataset area for that environment.
| 544
|
Seeing wildlife on a campout can be a real highlight of your adventure. These sightings can be a treat but can also be dangerous. Attacks by mountain lions on humans have happened but are rare. Knowing what to do when you're caught in this situation is very important.
Mountain lions are also known as pumas, panthers, or cougars. Another name for them is catamount. These creatures are shy and solitary, so they are seldom seen. If you happen to come upon one of these cougars, think smart. Cluster together with your hiking companions so that you appear as one big, noisy group. If you're hiking alone, extend your arms or hold your pack up as high as you can so you look bigger. Wave your trekking poles and shout. Never run or bend down, as this will make you look like prey. Call children to come next to you.
These cats roam anywhere from central Canada down to Patagonia. They can be anywhere from sea level up to the high alpine areas. They can survive from swamps to forests to timberline.
The female will fiercely protect its young. Don’t even approach that darling, little kitten. Back away slowly if you happen to come upon an adult or baby. If the cat continues to follow you, throw sticks or rocks while backing away.
A camper's fear of mountain lions is unjustified. They're not going to come and get you in your tent while you sleep. These animals are rarely seen, but you may be able to catch a fleeting glimpse of one. Even though sightings are rare, be sure to use caution at all times. These cats' stealth and adaptability allow them to survive well in a wide range of environments.
They’ve been known to jump 20 feet in one leap. The puma creeps up on its prey until it’s close enough to leap upon it. Their diet consists of deer, rabbits, turkeys, and grouse. Juvenile cats are trying to establish their territory while the mother cats will protect their young.
My own personal sightings have occurred in areas with an abundance of young rabbits. My sightings have been in the morning hours or at dusk. When my wife and I were in the High Uintahs, we witnessed a Canadian Lynx. It was in an area where lots of naïve, young cottontail rabbits abound.
| 714
|
MONTPELIER — Vermont students' vocabulary comprehension in the fourth and eighth grades is above national averages, according to data released by the National Assessment of Educational Progress.
Challis Breithaupt, NAEP state coordinator at Vermont Department of Education, said it was the first-ever national vocabulary report.
“There was a new framework for reading in 2009 and the feeling was there needed to be a criteria developed for vocabulary,” Breithaupt said.
The measure was designed to assess students' ability to understand vocabulary and to acquire meaning from the passages they were reading.
“Helping students to increase their vocabulary and to feel comfortable using words in various contexts is paramount,” state Education Commissioner Armando Vilaseca said in a statement. “There is significant research in the field supporting a link between vocabulary and comprehension.”
On a scale of 0-500, fourth-graders scored 224 and eighth-graders scored 274. The national average for fourth- and eighth-graders was 217 and 263, respectively.
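That puts Vermont's fourth-graders 7 points above the national average and its eighth-graders 11 points above it.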
Only Connecticut, Massachusetts, Montana and North Dakota scored higher than Vermont’s eighth-grade vocabulary.
“Students use their knowledge of words in order to understand what they are reading, to identify ideas and themes,” said Vilaseca. “Summer reading programs continue to support the good work that is done throughout the school year; keeping our children’s minds active supports strong reading and comprehension skills.”
The NAEP addressed the method of the vocabulary test in its results: “Unlike traditional tests of vocabulary that ask students to write definitions of words in isolation, NAEP always assesses word meaning within the context of particular passages. Students are asked to demonstrate their understanding of words by recognizing what meaning the word contributes to the passage in which it appears.”
For more information, the report card can be found online at www.nationsreportcard.gov.
| 30
|
Simple technique appears to be safe and effective, review suggests
By Robert Preidt
MONDAY, Oct. 15 (HealthDay News) -- A technique called the "mother's kiss" is a safe and effective way to remove foreign objects from the nostrils of young children, according to British researchers.
In the mother's kiss, a child's mother or other trusted adult covers the child's mouth with their mouth to form a seal, blocks the clear nostril with their finger, and then blows into the child's mouth. The pressure from the breath may expel the object in the blocked nostril.
Before using it, the adult should explain the technique to the child so that he or she is not frightened. If the first attempt is unsuccessful, the technique can be tried several times, according to a review published in the current issue of the CMAJ (Canadian Medical Association Journal).
For their report, researchers in Australia and the United Kingdom examined eight case studies in which the mother's kiss was used on children aged 1 to 8 years.
"The mother's kiss appears to be a safe and effective technique for first-line treatment in the removal of a foreign body from the nasal cavity," Dr. Stephanie Cook, of the Buxted Medical Centre in England, and colleagues concluded. "In addition, it may prevent the need for general anesthesia in some cases."
Further research is needed to compare various positive-pressure techniques and test how effective they are in different situations where objects are in various locations and have spent different lengths of time in the nasal passages, the researchers noted in a journal news release.
The U.S. National Library of Medicine has more about foreign objects in the nose.
SOURCE: CMAJ (Canadian Medical Association Journal), news release, Oct. 15, 2012
Copyright © 2012 HealthDay. All rights reserved. URL:http://www.healthscout.com/template.asp?id=669525
| 332
|
Project looked at massive mobile launch platform, shuttle transporter
ALBUQUERQUE, N.M. - Sandia National Laboratories recently conducted a series of tests to help NASA understand the fatigue on the space shuttle caused during rollout from the Kennedy Space Center assembly building to the launch pad - a four-mile trip.
The tests are part of NASA's return-to-flight mission, with the first flight scheduled between May 15 and June 3.
Sandia, a National Nuclear Security Administration laboratory, helped NASA design the test and instrumentation to measure the dynamic vibration environment during rollout. Sandia also computed the input forces the crawler applies to the Mobile Launch Platform (MLP). These computations are being used by Boeing and NASA to determine the fatigue life for critical shuttle components.
Sandia engineer Tom Carne assisted in a series of tests beginning in November 2003 to develop the data necessary to understand the environment and the response of the space shuttle vehicle during rollout.
"NASA requested Sandia to assist them in this project because of our expertise in planning and conducting structural dynamic tests on very large structures," Carne says.
Sandia's solid mechanics/structural dynamics group has done numerous structural analysis projects on large structures including the I-40 Rio Grande bridge in Albuquerque, large wind turbines up to 110 meters tall, and the Department of Energy's Armored Tractor. One of the group's main missions is analysis and testing of the shock and vibration environments for weapons.
The three-million-pound shuttle sits on the eight-million-pound mobile launch platform, which is carried by a six-million-pound crawler. The crawler transports the vehicle and platform four miles from the Vehicle Assembly Building to the launch pad.
Moving the shuttle that distance, which normally takes five to six hours at 0.9 mph, had been considered a relatively low-stress process during most of the life of the shuttle system. As the equipment ages, however, more emphasis is being given to understanding how the rollout may fatigue the transport system.
Data were collected for rollouts of the MLP-only and the MLP with the two solid rocket boosters, at five different speeds ranging from 0.5 to 0.9 mph. For the tests more than 100 accelerometers were placed on the MLP, crawler, and solid rocket boosters. A data acquisition system installed inside the MLP for the road test measured and recorded the accelerations. The data were analyzed so that the character of the rollout environment is understood and can be analytically imposed on the shuttle using a finite-element computer model to predict fatigue damage to critical components. Even though these stresses are much lower than those seen during the launch, the five- to six-hour duration of the transport and the low-frequency vibration could cause fatigue in components within the orbiter.
Carne says the rollout analysis team determined that there are two families of forcing harmonics caused by the crawler drive train that vibrate the platform as a function of crawler speed, in addition to the random inputs induced by the road bed. Fortunately, he says, the harmonic forcing frequencies can be adjusted by merely changing the drive speed of the crawler, resulting in less damaging frequencies.
The team used a Sandia-developed algorithm, the Sum of Weighted Accelerations Technique (SWAT), to estimate the applied forces. Carne says the SWAT results were beneficial in choosing a new rollout speed that will extend the fatigue life of the shuttle components that were affected by rollout.
The SWAT-generated input forces have subsequently been used as the force input for NASA's NASTRAN structural analysis of the mobile launch platform, emulating the test conditions. The correlation between the rollout-measured data and the predictions from the NASTRAN analysis has engendered confidence in both the SWAT-computed forces and the NASTRAN model of the MLP and solid rocket boosters, he says.
The analyses showed that the shuttle's vibration response can be much reduced when the driving frequencies are shifted away from the shuttle's own resonant natural frequencies. They helped NASA determine that merely reducing crawler speed from 0.9 mph to 0.8 mph would significantly reduce the vibrations in the shuttle by shifting the impact frequency of the crawler treads.
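As a rough back-of-the-envelope check (assuming the forcing harmonics scale linearly with crawler speed, as their drive-train origin suggests), slowing from 0.9 mph to 0.8 mph lowers each harmonic frequency by a factor of 0.8/0.9, roughly an 11 percent shift, which can be enough to move the dominant forcing frequencies away from the shuttle's resonant natural frequencies.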
Source: Eurekalert & others. Last reviewed by John M. Grohol, Psy.D. on 21 Feb 2009
Published on PsychCentral.com. All rights reserved.
| 309
|
Constitution of Oregon: 2011 Version
Sec. 1. Election to accept or reject Constitution
2. Questions submitted to voters
3. Majority of votes required to accept or reject Constitution
4. Vote on certain sections of Constitution
5. Apportionment of Senators and Representatives
6. Election under Constitution; organization of state
7. Former laws continued in force
8. Officers to continue in office
9. Crimes against territory
10. Saving existing rights and liabilities
11. Judicial districts
Section 1. Election to accept or reject Constitution. For the purpose of taking the vote of the electors of the State, for the acceptance or rejection of this Constitution, an election shall be held on the second Monday of November, in the year 1857, to be conducted according to existing laws regulating the election of Delegates in Congress, so far as applicable, except as herein otherwise provided.
Section 2. Questions submitted to voters. Each elector who offers to vote upon this Constitution, shall be asked by the judges of election this question:
Do you vote for the Constitution? Yes, or No.
And also this question:
Do you vote for Slavery in Oregon? Yes, or No.
And in the poll books shall be columns headed respectively.
“Constitution, Yes.” “Constitution, No.”
“Slavery, Yes.” “Slavery, No.”
And the names of the electors shall be entered in the poll books, together with their answers to the said questions, under their appropriate heads. The abstracts of the votes transmitted to the Secretary of the Territory, shall be publicly opened, and canvassed by the Governor and Secretary, or by either of them in the absence of the other; and the Governor, or in his absence the Secretary, shall forthwith issue his proclamation, and publish the same in the several newspapers printed in this State, declaring the result of the said election upon each of said questions. [Constitution of 1859; Amendment proposed by S.J.R. 7, 2001, and adopted by the people Nov. 5, 2002]
Section 3. Majority of votes required to accept or reject Constitution. If a majority of all the votes given for, and against the Constitution, shall be given for the Constitution, then this Constitution shall be deemed to be approved, and accepted by the electors of the State, and shall take effect accordingly; and if a majority of such votes shall be given against the Constitution, then this Constitution shall be deemed to be rejected by the electors of the State, and shall be void.–
Section 4. Vote on certain sections of Constitution. If this Constitution shall be accepted by the electors, and a majority of all the votes given for, and against slavery, shall be given for slavery, then the following section shall be added to the Bill of Rights, and shall be part of this Constitution:
“Sec. ___ “Persons lawfully held as slaves in any State, Territory, or District of the United States, under the laws thereof, may be brought into this State, and such Slaves, and their descendants may be held as slaves within this State, and shall not be emancipated without the consent of their owners.”
And if a majority of such votes shall be given against slavery, then the foregoing section shall not, but the following sections shall be added to the Bill of Rights, and shall be a part of this Constitution.
“Sec. ___ There shall be neither slavery, nor involuntary servitude in the State, otherwise than as a punishment for crime, whereof the party shall have been duly convicted.” [Constitution of 1859; Amendment proposed by S.J.R. 7, 2001, and adopted by the people Nov. 5, 2002]
Note: See sections 34 and 35 of Article I, Oregon Constitution.
Section 5. Apportionment of Senators and Representatives. Until an enumeration of the inhabitants of the State shall be made, and the senators and representatives apportioned as directed in the Constitution, the County of Marion shall have two senators, and four representatives.
Linn two senators, and four representatives.
Lane two senators, and three representatives.
Clackamas and Wasco, one senator jointly, and Clackamas three representatives, and Wasco one representative.
Yamhill one senator, and two representatives.
Polk one senator, and two representatives.
Benton one senator, and two representatives.
Multnomah, one senator, and two representatives.
Washington, Columbia, Clatsop, and Tillamook one senator jointly, and Washington one representative, and Washington and Columbia one representative jointly, and Clatsop and Tillamook one representative jointly.
Douglas, one senator, and two representatives.
Jackson one senator, and three representatives.
Josephine one senator, and one representative.
Umpqua, Coos and Curry, one senator jointly, and Umpqua one representative, and Coos and Curry one representative jointly. [Constitution of 1859; Amendment proposed by S.J.R. 7, 2001, and adopted by the people Nov. 5, 2002]
Section 6. Election under Constitution; organization of state. If this Constitution shall be ratified, an election shall be held on the first Monday of June 1858, for the election of members of the Legislative Assembly, a Representative in Congress, and State and County officers, and the Legislative Assembly shall convene at the Capital on the first Monday of July 1858, and proceed to elect two senators in Congress, and make such further provision as may be necessary to the complete organization of a State government.–
Section 7. Former laws continued in force. All laws in force in the Territory of Oregon when this Constitution takes effect, and consistent therewith, shall continue in force until altered, or repealed.–
Section 8. Officers to continue in office. All officers of the Territory of Oregon, or under its laws, when this Constitution takes effect, shall continue in office, until superseded by the State authorities.–
Section 9. Crimes against territory. Crimes and misdemeanors committed against the Territory of Oregon shall be punished by the State, as they might have been punished by the Territory, if the change of government had not been made.–
Section 10. Saving existing rights and liabilities. All property and rights of the Territory, and of the several counties, subdivisions, and political bodies corporate, of, or in the Territory, including fines, penalties, forfeitures, debts and claims, of whatsoever nature, and recognizances, obligations, and undertakings to, or for the use of the Territory, or any county, political corporation, office, or otherwise, to or for the public, shall inure to the State, or remain to the county, local division, corporation, officer, or public, as if the change of government had not been made. And private rights shall not be affected by such change.–
Section 11. Judicial districts. Until otherwise provided by law, the judicial districts of the State, shall be constituted as follows: The counties of Jackson, Josephine, and Douglas, shall constitute the first district. The counties of Umpqua, Coos, Curry, Lane, and Benton, shall constitute the second district.–The counties of Linn, Marion, Polk, Yamhill and Washington, shall constitute the third district.–The counties of Clackamas, Multnomah, Wasco, Columbia, Clatsop, and Tillamook, shall constitute the fourth district–and the County of Tillamook shall be attached to the county of Clatsop for judicial purposes.–
| 547
|
Welfare state in capitalism
Since the eruption of the sub-prime mortgage crises of 2007 to 2009, there have been hot debates on how the current “greedy capitalism” must change into sustainable capitalism.
The “Occupy Wall Street” protests were against the far-reaching negative consequences of neo-liberal capitalism.
In the wake of the global financial meltdown, the notion of Capitalism 4.0 has suddenly made headlines and the Davos Forum early this year also addressed how to make capitalism sustainable.
To what extent is this debate and the search for a new version of capitalism relevant to Korea?
Unlike the advanced Western capitalist states, which have more than 300 years of history, Korea developed modern capitalism only in the past half century, yet it is regarded as the representative economy among the East Asian miracle performers.
Korea formally adopted capitalism in the post-liberation era after 1945 by formulating a constitution containing a market economy as its basic tenet to promote economic freedom and encourage creativity.
As a corollary of this fundamental principle, the Korean constitution dictates that the government can regulate and adjust economic matters to supplement market economic principles to achieve social justice and economic democracy.
Empirical analysis suggests that the Korean economy has achieved both growth and equity.
Ironically, at the outset of Korea’s modern economic development, the Korean War (1950-1953) allowed everyone in Korea to be on equal footing.
Furthermore, Korea, with the birth of a new democratic government, carried out two institutional reforms: land reform based on owner-cultivator principles and a compulsory education system for school-age children to be enrolled at a primary school, at the very least.
In the past half century, Korea's rapid economic growth has revolved around the creation of chaebol (large conglomerates), administered credit allocation from banks to priority sectors under easy monetary policies, and export-oriented, outward-looking policies, all of which concentrated economic power heavily in the chaebol's favor.
However, in the post-1997/98 Asian financial crisis era, and with the recent global crisis, Korea’s compressed growth model has resulted in worsening income polarization and jobless growth.
Korea now faces serious challenges on its way to introducing a welfare state paradigm. As part of the campaigns for the general elections concluded last week, left-wing opposition parties introduced a series of free programs involving everything from school lunches, school tuition, universal welfare and college tuition reduction to tax hikes for the rich and a chaebol tax.
The conservative government party also put forward its own welfare policy platforms largely based on selective and prudential principles.
What type of welfare state Korea should design is likely to be a key issue in the December presidential election.
Korea should consider a welfare state as a distinct combination of democracy, welfare and capitalism, as implemented with different degrees of emphasis by Germany, all the Nordic countries, the Netherlands, New Zealand and the United Kingdom.
Given Korea’s transition to a matured advanced economy and ongoing income polarization, the country must address how to balance growth dynamism and equity.
Above all, the government should do its best to construct a social safety net for those in need.
We should avoid the popular approach of embracing "something for nothing" without due consideration of fiscal sustainability.
Korea may also consider privately channeled welfare measures within corporate social responsibility mandates, as set out in the United Nations' new norm for shared growth.
In this regard, Korea’s large conglomerates need to be proactive in nurturing human capital for a high-tech society.
Indeed, highly skilled manpower is the core element to ensuring a sustainable endogenous growth model.
Koreans should keep in mind that the provision of welfare should not discourage the incentive to work. We should not let our next generation bear the burden of financing excessive welfare expenditure for the present generation.
We should be aware of the fact that capitalism is not a static set of institutions but an evolutionary system that reinvents and reinvigorates itself through challenges and crises. The present case of Greece's demise makes this clear.
| 611
|
Anti-Slavery Convention of American Women
The first Anti-Slavery Convention of American Women was held on May 9, 1837. Approximately 200 women gathered in New York City to discuss their role in the American abolition movement. Mary S. Parker was the President of the gathering. Other prominent women went on to be vocal members of the Women's Suffrage Movement, including Lucretia Mott, the Grimké sisters, and Lydia Maria Child. The attendees included women of color, the wives and daughters of slaveholders, and women of low economic status. The convention was a monumental step, both for the women's rights movement, and the abolition movement as a whole. Despite the event's significance, it receives very little historical attention.
| 501
|
This following chronology looks back at the problem of xenophobia since South Africa’s first democratic elections in 1994.
The Zulu-based Inkatha Freedom Party (IFP) threatens to take “physical action” if the government fails to respond to the perceived crisis of undocumented migrants in South Africa.
IFP leader and Minister of Home Affairs Mangosutho Buthelezi says in his first speech to parliament: “If we as South Africans are going to compete for scarce resources with millions of aliens who are pouring into South Africa, then we can bid goodbye to our Reconstruction and Development Programme.”
In December gangs of South Africans try to evict perceived “illegals” from Alexandra township, blaming them for increased crime, sexual attacks and unemployment. The campaign, lasting several weeks, is known as “Buyelekhaya” (Go back home).
A report by the Southern African Bishops’ Conference concludes: “There is no doubt that there is a very high level of xenophobia in our country … One of the main problems is that a variety of people have been lumped together under the title of ‘illegal immigrants’, and the whole situation of demonising immigrants is feeding the xenophobia phenomenon.”
Defence Minister Joe Modise links the issue of undocumented migration to increased crime in a newspaper interview.
In a speech to parliament, Home Affairs Minister Buthelezi claims “illegal aliens” cost South African taxpayers “billions of rands” each year.
A study co-authored by the Human Sciences Research Council and the Institute for Security Studies reports that 65 percent of South Africans support forced repatriation of undocumented migrants. White South Africans are found to be most hostile to migrants, with 93 percent expressing negative attitudes.
Local hawkers in central Johannesburg attack their foreign counterparts. The chairperson of the Inner Johannesburg Hawkers Committee is quoted as saying: “We are prepared to push them out of the city, come what may. My group is not prepared to let our government inherit a garbage city because of these leeches.”
A Southern African Migration Project (SAMP) survey of migrants in Lesotho, Mozambique and Zimbabwe shows that very few would wish to settle in South Africa. A related study of migrant entrepreneurs in Johannesburg finds that these street traders create an average of three jobs per business.
Three non-South Africans are killed by a mob on a train travelling between Pretoria and Johannesburg in what is described as a xenophobic attack.
In December The Roll Back Xenophobia Campaign is launched by a partnership of the South African Human Rights Commission (SAHRC), the National Consortium on Refugee Affairs and the United Nations High Commissioner for Refugees (UNHCR).
The Department of Home Affairs reports that the majority of deportations are of Mozambicans (141,506), followed by Zimbabweans (28,548).
A report by the SAHRC notes that xenophobia underpins police action against foreigners. People are apprehended for being “too dark” or “walking like a black foreigner”. Police also regularly destroy documents of black non-South Africans.
Sudanese refugee James Diop is seriously injured after being thrown from a train in Pretoria by a group of armed men. Kenyan Roy Ndeti and his room mate are shot in their home. Both incidents are described as xenophobic attacks.
In Operation Crackdown, a joint police and army sweep, over 7,000 people are arrested on suspicion of being illegal immigrants. In contrast, only 14 people are arrested for serious crimes.
A SAHRC report on the Lindela deportation centre, a holding facility for undocumented migrants, lists a series of abuses at the facility, including assault and the systematic denial of basic rights. The report notes that 20 percent of detainees claimed South African citizenship or that they were in the country legally.
According to the 2001 census, out of South Africa’s population of 45 million, just under one million foreigners are legally resident in the country. However, the Department of Home Affairs estimates there are more than seven million undocumented migrants.
Protests erupt at Lindela over claims of beatings and inmate deaths, coinciding with hearings into xenophobia by SAHRC and parliament’s portfolio committee on foreign affairs.
Cape Town’s Somali community claim that 40 traders have been the victims of targeted killings between August and September.
Somali-owned businesses in the informal settlement of Diepsloot, outside Johannesburg, are repeatedly torched.
In March UNHCR notes its concern over the increase in the number of xenophobic attacks on Somalis. The Somali community claims 400 people have been killed in the past decade.
In May more than 20 people are arrested after shops belonging to Somalis and other foreign nationals are torched during anti-government protests in Khutsong township, a small mining town about 50km southwest of Johannesburg. According to the International Organisation for Migration, 177,514 Zimbabweans deported from South Africa have passed through its reception centre across the border in Beitbridge since it opened in May 2006.
In March human rights organisations condemn a spate of xenophobic attacks around Pretoria that leave at least four people dead and hundreds homeless.
Sources include: IRIN, Human Rights Watch, SAMP, SAHRC, Centre for the Study of Violence and Reconciliation
| 381
|
How to grow a reader
Tips for turning a fledgling reader into a voracious one.
By Peg Tyre
Once your child has figured out how to decode words and can actually read in a sustained way, a chunk of his schooling should be focused on helping him squeeze meaning and richness out of the experience. You may remember the whole-language idea of exposing kids to print through fiction, poetry, newspapers, and drama. It is the wrong way to teach kids to read, but getting kids excited about the written word is a great way to turn fledgling readers into voracious ones.
And here's where all parents should step up to the plate. You've been reading to your child, great. Don't stop. Books on tape in the car work, too. But now that she is a reader, surround her with print. Get a newspaper delivered. Get her a library card and make the library a regular stop, like the grocery store and the dry cleaner. And get over your view of what "proper" book reading looks like — fiction, nonfiction, comic books, how-tos, mysteries, sports biographies, magazines about current events, fast cars, sleek airplanes, or video gaming. Open the door wide. Find ways to bring what she is reading into the conversation. Ask questions like: What kind of book is it? What is the setting? What happens? What do you like/not like about the way the author writes?
Word by word
Similar but more formal versions of this should be happening at school, but parents can reinforce this learning at home. Watch for it. If your child is reading and sampling a wide enough variety of material, he will be encountering a lot of words in print that he doesn't know. He should be able to sound out unfamiliar words. First, encourage him to figure out the meaning of unfamiliar words from their context, for example, what could propulsion mean based on the words that came before and after it? Then, see if he can tease out the meaning of the word by finding its root. For instance, the word propel is hidden in propulsion and gives a strong hint for the meaning of the word.
Teachers help students build comprehension through the systematic study of words. Yes, weekly vocabulary words. Kids who study words — by this I mean systematically learning their meanings — not only have larger vocabularies but are also better readers. It's not too effective for the teacher to hand out a list of ten words and have kids look them up and then take a test. They need to hear the words, see them, speak them, and write them that week and in the weeks that follow.
Word lists alone, though, aren't enough. Kids encounter an average of three thousand new words a year — more than eight a day. Unless the entire school day is going to be given over to word study (and no one thinks this is a good idea), teachers must instruct children on how to shave off chunks of an unfamiliar word and tease out its meaning by studying suffixes, prefixes, and the meaning of common root words.
Comprehension, fluency, and stamina should be growing steadily stronger as your child moves through school. Schools need to ensure that happens. So do parents. Do your part.
| 483
|
Deaths in Moscow have doubled to an average of 700 people a day as the Russian capital is engulfed by poisonous smog from wildfires and a sweltering heat wave, a top health official said today, according to the Associated Press.
The Russian newspaper Pravda reported: “Moscow is suffocating. Thick toxic smog has been covering the sky above the city for days. The sun in Moscow looks like the moon during the day: it’s not that bright and yellow, but pale and orange with misty outlines against the smoky sky. Muscovites have to experience both the smog and sweltering heat at once.”
“Russia has recently seen the longest unprecedented heat wave for at least one thousand years,” the head of the Russian Meteorological Center said, according to the news site Ria Novosti.
Various news sites report that foreign embassies have reduced activities or shut down, with many staff leaving Moscow to escape the toxic atmosphere.
Russian heatwave: This NASA map released today shows areas of Russia experiencing above-average temperatures this summer (orange and red). The map was released on NASA’s Earth Observatory website.
NASA Earth Observatory image by Jesse Allen, based on MODIS land surface temperature data available through the NASA Earth Observations (NEO) Website. Caption by Michon Scott.
According to NASA:
In the summer of 2010, the Russian Federation had to contend with multiple natural hazards: drought in the southern part of the country, and raging fires in western Russia and eastern Siberia. The events all occurred against the backdrop of unusual warmth. Bloomberg reported that temperatures in parts of the country soared to 42 degrees Celsius (108 degrees Fahrenheit), and the Wall Street Journal reported that fire- and drought-inducing heat was expected to continue until at least August 12.
This map shows temperature anomalies for the Russian Federation from July 20-27, 2010, compared to temperatures for the same dates from 2000 to 2008. The anomalies are based on land surface temperatures observed by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. Areas with above-average temperatures appear in red and orange, and areas with below-average temperatures appear in shades of blue. Oceans and lakes appear in gray.
Not all parts of the Russian Federation experienced unusual warmth on July 20-27, 2010. A large expanse of northern central Russia, for instance, exhibits below-average temperatures. Areas of atypical warmth, however, predominate in the east and west. Orange- and red-tinged areas extend from eastern Siberia toward the southwest, but the most obvious area of unusual warmth occurs north and northwest of the Caspian Sea. These warm areas in eastern and western Russia continue a pattern noticeable earlier in July, and correspond to areas of intense drought and wildfire activity.
Bloomberg reported that 558 active fires covering 179,596 hectares (693 square miles) were burning across the Russian Federation as of August 6, 2010. Voice of America reported that smoke from forest fires around the Russian capital forced flight restrictions at Moscow airports on August 6, just as health officials warned Moscow residents to take precautions against smoke inhalation.
Posted by David Braun
Earlier related post: Russia burns in hottest summer on record (July 28, 2010)
Talk about tough: These guys throw themselves out of 50-year-old aircraft into burning Siberian forests. (National Geographic Magazine feature, February 2008)
Photo by Mark Thiessen
Join Nat Geo News Watch community
Readers are encouraged to comment on this and other posts–and to share similar stories, photos and links–on the Nat Geo News Watch Facebook page. You must sign up to be a member of Facebook and a fan of the blog page to do this.
Leave a comment on this page
You may also email David Braun ([email protected]) if you have a comment that you would like to be considered for adding to this page. You are welcome to comment anonymously under a pseudonym.
| 211
|
The formation of counties was one of the first matters attended to by the Lords Proprietors after they received their charter in 1663 from King Charles II for the vast tract of land in America he called the province of Carolina. In 1664 the Proprietors formed "all that parte of the province which lyeth on the north east side or starboard side entering of the river Chowan now named by us Albemarle River together with the Islands and Isletts within tenn leagues thereof" into a county that they named Albemarle County for George Monck, the duke of Albemarle, himself one of the Proprietors. This was the site of the first permanent settlement in Carolina. They then divided the new county into four precincts: Currituck, Perquimans, Pasquotank, and Chowan. Albemarle County was subsequently enlarged, and in 1696 the area south of Albemarle Sound was removed from Albemarle and made into a new county, named Bath, which in turn was divided into the precincts of Beaufort, Hyde, Craven, and Carteret.
The primary reason for establishing counties (or precincts) was to provide local seats of government where citizens could record documents, such as deeds or wills, and participate in court proceedings. At the same time, the sheriff was provided with a home base from which to fulfill his basic responsibilities of collecting taxes and maintaining law and order.
By 1738 Albemarle and Bath Counties had been dissolved and the 14 precincts then in existence became counties, a designation that has remained since the seventeenth century. Throughout the remainder of the colonial period, as settlement spread westward and population increased, older counties were divided and new ones formed. With statehood came an even greater rate of growth, and by 1800 the number had risen to 59 counties covering all of the state. In many cases, the dividing of counties caused heated political controversy, as eastern counties were often divided to maintain that region's majority in the state legislature against expanding representation from the piedmont and mountain regions. Shifts in population continued throughout the nineteenth century and into the twentieth century, resulting in even more counties. Larger counties were divided, and those in turn were sometimes divided yet again, until the seemingly magical figure of 100 was reached in 1911. (For a time, the number of counties was actually greater than 100, but some of these were ceded to Tennessee in 1789 and others were absorbed into other counties or never fully developed.) The number remained at 100, although in 1933 the General Assembly authorized the consolidation of existing counties subject to approval of the electorate. This could have resulted for the first time in a decrease from the 100 county figure, but as of the early 2000s there had been no such consolidations.
Initially, county government and judicial matters were in the hands of justices of the peace , who formed a body known as the Court of Pleas and Quarter Sessions . The justices were appointed by the governor, with strong input from the members of the colonial Assembly from the affected county, leaving the average citizen with no say as to who would run the government of the county in which he lived. At first the Court of Pleas and Quarter Sessions met wherever it was convenient to assemble a quorum, usually in a private home. A 1722 act of the Assembly instructed the justices to pick a site for a permanent seat of government for each precinct, where they were to buy an acre of land and build a courthouse. Whether in the early precinct days, or after the name of the local government entity was changed from precinct to county, the justices had the support of a sheriff for law enforcement, as well as a clerk of court and a register of deeds. Of the three, both the clerk of court and the register of deeds needed to remain in their offices in the courthouse, which left only the sheriff free to travel about the county. Accordingly, he was also designated tax collector, a position sheriffs continued to hold until the latter part of the twentieth century.
The general system of county government of the early colonial period, with the appointed members of the Court of Pleas and Quarter Sessions running things, was carried over into statehood, and little changed until the adoption of the North Carolina Constitution of 1868 . The system called for by the new constitution, known as the Township and County Commissioner Plan, gave control of county government to five commissioners, to be elected at large by the county's voters. In addition, each county was divided into townships whose residents elected two justices to serve as the township's governing body, as well as a three-member school committee and a constable. The new system significantly reduced the General Assembly's control of county government, since the legislators no longer appointed the justices of the peace who made up the county court.
The Township and County Commissioner Plan, patterned after one previously adopted in Pennsylvania, did not prove universally popular in North Carolina and lasted less than a decade. At a constitutional convention in 1875, the General Assembly was authorized to change the system, and in the session of 1877 townships were reduced to little more than geographic and administrative subdivisions of the counties. This seriously reduced the authority of county commissioners.
The modern system of county government, in which an elected board of commissioners is responsible for managing a county's affairs, including setting the rate and collecting taxes and determining where funds should be expended, dates to the early twentieth century. Periodically after that, the General Assembly conferred additional authority and responsibility on the county commissioners, until at the end of the century they had been provided with such a wide range of "home-rule" statutes that many counties found it impossible to run their greatly expanded business without professional help. This led to the adoption by many counties of the County Manager Plan. Under this plan, commissioners employ a county manager to serve as a sort of chief executive of the county business (in some instances, the largest business in the county), with the manager having certain independent authority, including that of hiring and firing employees.
As with other matters, the state determines what sources the counties may tap for income. Traditionally, the real estate tax has been the primary revenue source for North Carolina counties. However, especially in the last half of the twentieth century, counties were able to prevail on the General Assembly to let them collect from a variety of other sources, among those favored being local sales taxes, land transfer taxes, meals taxes, and occupancy taxes.
A. Fleming Bell, ed., County Government in North Carolina (3rd ed., 1989).
David LeRoy Corbitt, The Formation of the North Carolina Counties, 1663-1943 (1969).
1 January 2006 | Stick, David
| 626
|
There are really two decisions that must be made regarding the hidden layers: how many hidden layers to actually have in the neural network and how many neurons will be in each of these layers. We will first examine how to determine the number of hidden layers to use with the neural network.
Problems that require two hidden layers are rarely encountered. However, neural networks with two hidden layers can represent functions with any kind of shape. There is currently no theoretical reason to use neural networks with any more than two hidden layers. In fact, for many practical problems, there is no reason to use any more than one hidden layer. Table 5.1 summarizes the capabilities of neural network architectures with various hidden layers.
Table 5.1: Determining the Number of Hidden Layers
| Number of Hidden Layers | Result |
|---|---|
| none | Only capable of representing linearly separable functions or decisions. |
| 1 | Can approximate any function that contains a continuous mapping from one finite space to another. |
| 2 | Can represent an arbitrary decision boundary to arbitrary accuracy with rational activation functions and can approximate any smooth mapping to any accuracy. |
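To make the contrast in Table 5.1 concrete, the short sketch below (assuming scikit-learn and NumPy are available; neither is prescribed by this text) trains a model with no hidden layer and a model with one small hidden layer on the XOR function, the classic example of a problem that is not linearly separable.

```python
# A minimal sketch, assuming scikit-learn and NumPy (not prescribed by this text).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# XOR: the classic function that no linear decision boundary can separate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# No hidden layer: a purely linear decision boundary (logistic regression).
linear = LogisticRegression().fit(X, y)
print("no hidden layer:", linear.score(X, y))    # typically 0.5-0.75; XOR is not separable

# One hidden layer with a handful of neurons can represent the non-linear boundary.
mlp = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                    solver="lbfgs", max_iter=2000, random_state=0).fit(X, y)
print("one hidden layer:", mlp.score(X, y))      # usually 1.0
```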
Deciding the number of hidden neuron layers is only a small part of the problem. You must also determine how many neurons will be in each of these hidden layers. This process is covered in the next section.
Deciding the number of neurons in the hidden layers is a very important part of deciding your overall neural network architecture. Though these layers do not directly interact with the external environment, they have a tremendous influence on the final output. Both the number of hidden layers and the number of neurons in each of these hidden layers must be carefully considered.
Using too few neurons in the hidden layers will result in something called underfitting. Underfitting occurs when there are too few neurons in the hidden layers to adequately detect the signals in a complicated data set.
Using too many neurons in the hidden layers can result in several problems. First, too many neurons in the hidden layers may result in overfitting. Overfitting occurs when the neural network has so much information processing capacity that the limited amount of information contained in the training set is not enough to train all of the neurons in the hidden layers. A second problem can occur even when the training data is sufficient. An inordinately large number of neurons in the hidden layers can increase the time it takes to train the network. The amount of training time can increase to the point that it is impossible to adequately train the neural network. Obviously, some compromise must be reached between too many and too few neurons in the hidden layers.
There are many rule-of-thumb methods for determining the correct number of neurons to use in the hidden layers, such as the following:
- The number of hidden neurons should be between the size of the input layer and the size of the output layer.
- The number of hidden neurons should be 2/3 the size of the input layer, plus the size of the output layer.
- The number of hidden neurons should be less than twice the size of the input layer.
These three rules provide a starting point for you to consider. Ultimately, the selection of an architecture for your neural network will come down to trial and error. But what exactly is meant by trial and error? You do not want to start throwing random numbers of layers and neurons at your network. To do so would be very time consuming. Chapter 8, “Pruning a Neural Network” will explore various ways to determine an optimal structure for a neural network.
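Until then, a structured search is usually enough. The sketch below (the dataset, library, and candidate sizes are assumptions made purely for illustration) scores each candidate hidden-layer size on held-out data rather than on the training set, which guards against mistaking overfitting for progress.

```python
# A structured trial-and-error sketch; dataset, library, and candidate sizes are
# illustrative assumptions, not part of this text.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

results = {}
for n_hidden in (5, 10, 14, 20, 40):   # candidates loosely following the rules of thumb above
    net = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=1000, random_state=0)
    net.fit(X_train, y_train)
    results[n_hidden] = net.score(X_val, y_val)   # score on held-out data, not training data

best = max(results, key=results.get)
print(results)
print("best hidden layer size:", best)
```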
| 23
|
New Zealand Curriculum
The New Zealand Curriculum is built around the acquisition of essential academic and practical skills. It identifies 7 academic or essential learning areas:
- Language and languages
- Mathematics
- Science
- Technology
- Social sciences
- The arts
- Health & physical well-being

These are balanced by 8 practical or essential skills:
- Communication skills
- Numeracy skills
- Information skills
- Problem-solving skills
- Self-management and competitive skills
- Social and co-operative skills
- Physical skills
- Work and study skills
Each term, most schools prepare student Progress Reports and hold parent-teacher evenings.
Subjects Taught At New Zealand Schools
The following is a general list of subjects taught in New Zealand schools. Not all schools offer all the subjects listed, and others may offer additional disciplines.
- Agriculture & Horticulture
- Business Studies
- Classical Studies
- Media Studies
- Physical Education
- Social Studies
- Graphics & Design
- Clothing & Design
The school year begins in late January or early February, after a summer holiday of about 6 weeks, and ends in December. It is divided into 4 terms with breaks of two to three weeks between them.
Secondary school students have slightly longer holidays than primary school students.
Check with your local New Zealand school for actual term dates; however, the terms usually run as follows:
Term 1 - End of January to early April
Term 2 - Late April to end of June
Term 3 - Mid July to late September
Term 4 - Mid October to mid December (or early December for
New Zealand’s qualifications system is changing from traditional examination-based awards to standards-based qualifications. In 2002, level 1 of the National Certificate of Educational Achievement (NCEA) replaced School Certificate. The NCEA will replace Sixth Form Certificate in 2003 and University Bursaries in 2004.
National Certificate of Educational Achievement
NCEA (National Certificate of Educational Achievement) is New Zealand's main national qualification for secondary school students and part of the National Qualifications Framework.
The Qualifications Framework covers industry and education qualifications from year 11 (formerly Form 5) of secondary schooling and entry level to vocations, through to post-graduate level.
All qualifications currently on the Framework are made up of national standards. A standard describes what a learner should aim to achieve in a skill or knowledge area. Standards are set by written criteria along with a national moderation system. Learners who meet all requirements get credit for that standard; those who don't may be reassessed when they are ready.
Each standard is at a level from 1 to 8. Level 1 is similar to School Certificate level; level 2 to Sixth Form Certificate; levels 3 and 4 are similar to University Bursaries. Each standard also has a credit rating.
Schools can also use many standards from beyond the school curriculum. Any number or combination of standards may be assessed within a course, so schools can develop courses to suit their students.
Students accumulate Framework credits towards National Certificates and National Diplomas. As well as being able to work towards a range of National Certificates, eg, National Certificate in Computing, from 2002 school students will work towards a general qualification, the National Certificate of Educational Achievement (NCEA). Students can start on Framework qualifications at school and carry on in the workplace or tertiary studies.
NCEA provides the pathway to tertiary education and workplace training and gives everyone a full picture of what students know and can do.
- Challenges students of all abilities, in all learning areas
- Reports more details about a student's achievement
- Is officially recognised in New Zealand and internationally
- Is recognised by employers, universities and polytechnics and used as the benchmark for selection
- Provides opportunities to begin studying for tertiary and industry qualifications
- Enables students to gain credits from traditional school curriculum areas AND alternative school curriculum programmes
- Has exams as well as internal assessment
- Has a national system for checking internal assessments
- Shows credits and grades for separate skills and knowledge in some standards
The National Qualifications Framework contains two types of national standards: achievement standards and unit standards. Credits from all achievement standards and all unit standards count towards NCEA.
Choosing A School
Most New Zealand students attend state-funded schools, and every student has the right to enrol at the state school nearest to their home. If the school is at risk of overcrowding, it can set a home zone that is geographically defined. Students living in this zone have the right to go to that school. Those living outside the zone can be enrolled only under special circumstances. These include situations where students have brothers or sisters attending the school or require access to special programmes such as special education or Maori language. If the school is still at risk of over-crowding, selection is made through a supervised ballot.
ERO (Education Review Office) reports are available at no charge from New Zealand schools and ERO offices.
Families also have the right to visit schools and meet with the principal and staff before deciding to enrol their children.
State schools are fully funded by the Government. At primary and intermediate level they are co-educational, with classes that include both boys and girls. Both co-educational and single-sex schooling is available at secondary level.
State schools do not charge fees; however, parents are expected to make donations towards the support of special programmes or services. There are also charges for stationery and uniforms.
Meals are not provided. Snacks can generally be purchased from the school Tuck Shop, but many parents prefer to provide a packed lunch.
The term integrated schools generally refers to schools with a religious focus, usually Roman Catholic in denomination, that used to operate as private institutions. In recent years, these schools have been integrated into the state system, hence the name, and receive government funding. Although they follow the state curriculum requirements, all have retained their special religious or philosophical character. A small number of institutions, such as Montessori or Rudolf Steiner schools, are secular in character.
Private or independent schools receive only limited government funding and are almost entirely dependent on income derived from student fees. There are no standard fees, as each school determines its own fee scale. Fees also vary according to levels, with fees in Years 12 and 13 usually significantly higher than those charged in Years 9 and 10.
Fees at primary schools also vary according to level, although these are generally much lower than secondary school fees.
Private schools are governed by their own independent boards but must meet government standards in order to be registered, and they are also subject to the same ERO audits as state schools.
Boarding schools exist mainly at secondary school level. Currently 78 state and integrated schools and 24 private schools have boarding arrangements.
The Correspondence School teaches a full range of school subjects.
Home-based schooling must meet the same standards as registered schools, and approval to exempt the student from regular schooling must be obtained from the Ministry of Education.
A small annual grant is available for teaching materials. Home schooling accounts for less than 1% of school enrolments.
Most schools require students to wear a uniform at all times unless the school has an optional uniform policy. School uniforms are sold by most major department stores, and some schools also operate their own Uniform Shops and sell both new and second-hand items.
Teachers are not allowed to physically punish students in their care. Legal disciplinary methods include removal of privileges, extra homework or detention. Parents or guardians are advised in advance if a child is given detention, as this will require the child to stay at school for a specified time after the end of the standard school day.
For serious offences, students may be suspended from school for a period of time and, if they are over 16 years of age, they can be expelled permanently. Expulsion generally occurs when a student's conduct either sets a dangerous example to other students or threatens their safety. There are formal procedures for suspending or expelling a student.
Most secondary and primary schools expect students to do homework. Each school has its own rules on the amount and type of homework.
Parents or guardians are responsible for ensuring that a child can get to school. Each year about 100,000 children use school buses. Although school bus services are contracted by the Ministry of Education, students are expected to meet the costs.
If a child has to travel a long distance to school, and there is no public transport or school bus service, financial assistance can be provided. Financial assistance and/or bus and taxi services are provided for special education students.
If you plan to change schools, the principal of your child's current school should be informed as soon as possible.
Transfers involving a change in the level of schooling, such as from primary to intermediate or intermediate to secondary, require additional documentation. Details of application procedures are provided by the school the student plans to transfer to. Most intermediate and secondary schools have open
| 655
|
Behaviour-driven development is an “outside-in” methodology. It starts at the outside by identifying business outcomes, and then drills down into the feature set that will achieve those outcomes. Each feature is captured as a “story”, which defines the scope of the feature along with its acceptance criteria. This article introduces the BDD approach to defining and identifying stories and their acceptance criteria.
Software delivery is about writing software to achieve business outcomes. It sounds obvious, but often political or environmental factors distract us from remembering this. Sometimes software delivery can appear to be about producing optimistic reports to keep senior management happy, or just creating “busy work” to keep people in paid employment, but that’s a topic for another day.
Usually, the business outcomes are too coarse-grained to be used to directly write software (where do you start coding when the outcome is “save 5% of my operating costs”?) so we need to define requirements at some intermediate level in order to get work done.
Behaviour-driven development (BDD) takes the position that you can turn an idea for a requirement into implemented, tested, production-ready code simply and effectively, as long as the requirement is specific enough that everyone knows what’s going on. To do this, we need a way to describe the requirement such that everyone – the business folks, the analyst, the developer and the tester – has a common understanding of the scope of the work. From this they can agree a common definition of “done”, and we escape the dual gumption traps of “that’s not what I asked for” or “I forgot to tell you about this other thing”.
This, then, is the role of a Story. It has to be a description of a requirement and its business benefit, and a set of criteria by which we all agree that it is “done”. This is a more rigorous definition than in other agile methodologies, where it is variously described as a “promise of a conversation” or a “description of a feature”. (A BDD story can just as easily describe a non-functional requirement, as long as the work can be scoped, estimated and agreed on.)
The structure of a story
BDD provides a structure for a story. This is not mandatory – you can use a different story format and still be doing BDD – but I am presenting it here because it has been proven to work on many projects of all shapes and sizes. At the very least, your story should contain all of the elements described in the template. The story template looks like this:
Title (one line describing the story)

Narrative:
As a [role]
I want [feature]
So that [benefit]

Acceptance Criteria: (presented as Scenarios)

Scenario 1: Title
Given [context]
And [some more context]...
When [event]
Then [outcome]
And [another outcome]...

Scenario 2: ...
Telling the story
A story should be the product of a conversation involving several people. A business analyst talks to a business stakeholder[1] about the feature or requirement, and helps them to frame it as a story narrative. Then a tester helps define the scope of the story – in the form of acceptance criteria – by determining which scenarios matter and which are less useful. A technical representative will provide a ballpark estimate of the amount of work involved in the story, and propose alternative approaches. Many great ideas for systems come from the people developing them as well as the people who asked for them in the first place.
This will likely be an iterative process. The stakeholder will have an idea of what they want but will usually not know how much work will be involved, or how that work will be allocated. With the help of the technical and testing experts, they will understand the cost/benefit trade-off of each scenario and can make a judgement about whether they want it. Of course this is also balanced up against the other requirements: is it better to cover more edge cases in this story, or to move on to another story?
Sometimes the development team will simply not know enough to be able to make even a ballpark estimate. In this case they may choose to carry out some investigative work, known as a “spike”, in order to understand more about the requirement. (I will cover planning in more detail in a future article.)
The characteristics of a good story
Using the example from the article Introducing BDD, let’s look at the requirements for withdrawing cash from an ATM:
Story: Account Holder withdraws cash

As an Account Holder
I want to withdraw cash from an ATM
So that I can get money when the bank is closed

Scenario 1: Account has sufficient funds
Given the account balance is $100
And the card is valid
And the machine contains enough money
When the Account Holder requests $20
Then the ATM should dispense $20
And the account balance should be $80
And the card should be returned

Scenario 2: Account has insufficient funds
Given the account balance is $10
And the card is valid
And the machine contains enough money
When the Account Holder requests $20
Then the ATM should not dispense any money
And the ATM should say there are insufficient funds
And the account balance should be $10
And the card should be returned

Scenario 3: Card has been disabled
Given the card is disabled
When the Account Holder requests $20
Then the ATM should retain the card
And the ATM should say the card has been retained

Scenario 4: The ATM has insufficient funds
...
As you can see, there are a number of scenarios to consider, some related to the account balance, others about the card and yet others about the ATM itself. Let’s dissect the story to determine whether it is any good.
The title should describe an activity
The title of the story, “Account Holder withdraws cash”, describes an activity that the account holder wants to carry out. Until we implement this feature, the account holder won’t be able to withdraw money from the ATM. Once we have delivered it, they will. That gives us an obvious starting point for determining what “done” looks like.
If we had a title like “Account management” or “ATM behaviour”, we would have to look harder to understand when we were finished, and the edges would be a lot more fuzzy. For instance, “account management” might incorporate applying for a loan, and “ATM behaviour” might encompass changing the PIN number on my cash card. The story title should always describe actual behaviour by a user of the system.
The narrative should include a role, a feature and a benefit
The template “As a [role] I want [feature] so that [benefit]” has a number of advantages. By specifying the role within the narrative, you know who to talk to about the feature. By specifying the benefit, you cause the story writer to consider why they want a feature.
It gets interesting if you find the feature won’t actually deliver the benefit attributed to it. This usually means you have a missing story. There is one story with the current feature, which delivers a different benefit (and is therefore still useful), and a hidden story whereby you will need a different feature to deliver the benefit described.
The example story tells us there is an Account Holder who cares about the feature being delivered, so we know where to start exploring what it should do.
The scenario title should say what’s different
You should be able to line up the scenarios side by side, and describe how they differ using only the title. In our example, we can see that the scenario descriptions say only what’s different between each scenario. You don’t need to say “an account holder withdraws money from an account with insufficient funds and is told they are unable to fulfil the transaction”. It’s obvious from the title whether this is the scenario you care about, compared to the others.
The scenario should be described in terms of Givens, Events and Outcomes
This is the single most powerful behavioural shift I have seen in teams adopting BDD. Simply by getting the business users, the analysts, the testers and the developers to adopt this vocabulary of “given/when/then”, they discover that a world of ambiguity falls away.
Not all scenarios are this simple. Some are best represented as a sequence of events, described as: given [some context] when [I do something] then [this happens] when [I do another thing] then [this new thing happens] and so on. An example is a wizard-style website, where you step through a sequence of screens to build up a complex data model. It is perfectly appropriate to intermingle sequences of events and outcomes, as long as you get into the habit of thinking in these terms.
One interesting emergent behaviour is that the quality of the conversation changes. You will quickly discover that you have missed out an assumed given (“well of course the account is overdrawn”), or forgotten to verify an outcome (“well naturally the account holder gets their card back”). I observed this on one particular project where the lead developer told me he could sense the analysts and developers were talking at cross purposes, but couldn’t see a way to demonstrate this to them. Within a few days of introducing the given/when/then vocabulary, he could see a dramatic improvement in the quality of their interactions.
The givens should define all of, and no more than, the required context
Any additional givens are distracting, which makes it hard for someone looking at the story for the first time – whether from the technical or business side – to understand what they need to know. Similarly any missing givens are really assumptions. If you can get a different outcome from the givens provided, then there must be something missing.
In the example, the first scenario says something about the account balance, the card and the ATM itself. All of these are required to fully define the scenario. In the third scenario, we don’t say anything about the account balance or whether the ATM has any money. This implies that the machine will retain the card whatever the account balance, and whatever the state of the ATM.
The event should describe the feature
The event itself should be very simple, typically only a single call into the production code. As discussed above, some scenarios are more complicated than this, but mostly the scenarios for a story will revolve around a single event. They will differ only in the context (the givens) and the corresponding expected outcomes.
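As a rough illustration of this point, the self-contained sketch below (the Account and Atm classes are hypothetical stand-ins, not code from this article) expresses Scenario 1 as an automated test: the givens build the context, the "when" is a single call into the production code, and each "then" verifies one outcome.

```python
# A self-contained sketch; Account and Atm are hypothetical stand-ins, not code
# from this article. The point is the shape of the test: givens set up context,
# the "when" is one call into production code, and the thens verify outcomes.

class Account:
    def __init__(self, balance):
        self.balance = balance

class Atm:
    def __init__(self, cash_available):
        self.cash_available = cash_available

    def withdraw(self, account, card_valid, amount):
        # Production code under test: dispense only if the card is valid, the
        # account has sufficient funds, and the machine holds enough cash.
        if card_valid and account.balance >= amount and self.cash_available >= amount:
            account.balance -= amount
            self.cash_available -= amount
            return {"dispensed": amount, "card_returned": True}
        return {"dispensed": 0, "card_returned": True}

def test_account_has_sufficient_funds():
    # Given the account balance is $100, the card is valid,
    # and the machine contains enough money
    account = Account(balance=100)
    atm = Atm(cash_available=10_000)

    # When the Account Holder requests $20 (a single call into production code)
    result = atm.withdraw(account, card_valid=True, amount=20)

    # Then the ATM should dispense $20, the account balance should be $80,
    # and the card should be returned
    assert result["dispensed"] == 20
    assert account.balance == 80
    assert result["card_returned"] is True

test_account_has_sufficient_funds()  # or collect it with a test runner such as pytest
```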
The story should be small enough to fit in an iteration
There are no hard and fast rules about how you do this, as long as you break it down into demonstrable chunks. In general if there are more than about five or six scenarios, a story can probably be broken down by grouping similar scenarios together.
We can’t tell from the ATM example how many more scenarios there are for this story, but I am suspicious that there may be several more. Essentially we have three “moving parts” in this story, namely the account balance, the status of the cash card and the state of the ATM. We could get into more detail with the cash card: what if it is out of date, so I can’t use it to withdraw cash but the ATM still returns it to me? What if the ATM breaks partway through the transaction? What if my account has an overdraft facility?
It might be better to break the story out into several smaller stories:
- Account Holder withdraws cash (assumptions: ATM is working and card is valid)
- Account Holder withdraws cash with invalid card (assumptions: ATM is working)
- Account Holder withdraws cash from flaky ATM (assumptions: card is valid)
Although this may seem artificial, it allows you to demonstrate progress in business terms and gives you more data points to track with. The important thing is always to break the story out along business lines by scenarios (and making the assumptions explicit) rather than along technical lines (e.g. doing the database stuff in this iteration and the GUI stuff in the next iteration). This is so the business can see demonstrable progress on their own terms rather than just taking your word for it.
So how is this different from Use Cases?
There are use cases and there are use cases. I’m a huge fan of the way Alistair Cockburn describes use cases (as opposed to the over-engineered versions I have encountered on RUP-as-waterfall projects). Given that I don’t have much experience on use case-driven projects, I’ll leave it to others to draw the comparisons.
Certainly I agree with his process of starting at the lower precision (of an outcome, or goal) and working towards higher precisions, taking in more exception scenarios as you go. In BDD, this means starting with the business outcome, and working through high level functional areas to drill into specific stories with acceptance criteria.
In reality, it doesn’t matter what your process is to identify and elaborate requirements. It’s fine for you to write requirements documents if that helps you organise your thoughts. It’s not fine, however, to pass those documents along as if they actually encapsulated all your thoughts, because they don’t. Instead, you should put the requirements document, or stack of use cases, to one side and start over defining stories from business outcomes, safe in the knowledge that your hard work has meant that you have all the answers in your head – or at least a good enough understanding to outline the work as you currently see it.
Behaviour-driven development uses a story as the basic unit of functionality, and therefore of delivery. The acceptance criteria are an intrinsic part of the story – in effect they define the scope of its behaviour, and give us a shared definition of “done”. They are also used as the basis for estimation when we come to do our planning.
Most importantly, the stories are the result of conversations between the project stakeholders, business analysts, testers and developers. BDD is as much about the interactions between the various people in the project as it is about the outputs of the development process.
1. Actually it should be the person who actually cares about the feature, so it may equally be an operational, legal or security person depending on the story.
| 436
|
Opportunities and Challenges in High Pressure Processing of Foods By Rastogi, N K; Raghavarao, K S M S; Balasubramaniam, V M; Niranjan, K; Knorr, D Consumers increasingly demand convenience foods of the highest quality in terms of natural flavor and taste, and which are free from additives and preservatives. This demand has triggered the need for the development of a number of nonthermal approaches to food processing, of which high-pressure technology has proven to be very valuable. A number of recent publications have demonstrated novel and diverse uses of this technology. Its novel features, which include destruction of microorganisms at room temperature or lower, have made the technology commercially attractive. Enzymes and even spore forming bacteria can be inactivated by the application of pressure-thermal combinations, This review aims to identify the opportunities and challenges associated with this technology. In addition to discussing the effects of high pressure on food components, this review covers the combined effects of high pressure processing with: gamma irradiation, alternating current, ultrasound, and carbon dioxide or anti-microbial treatment. Further, the applications of this technology in various sectors-fruits and vegetables, dairy, and meat processing-have been dealt with extensively. The integration of high-pressure with other matured processing operations such as blanching, dehydration, osmotic dehydration, rehydration, frying, freezing / thawing and solid- liquid extraction has been shown to open up new processing options. The key challenges identified include: heat transfer problems and resulting non-uniformity in processing, obtaining reliable and reproducible data for process validation, lack of detailed knowledge about the interaction between high pressure, and a number of food constituents, packaging and statutory issues. Keywords high pressure, food processing, non-thermal processing Consumers demand high quality and convenient products with natural flavor and taste, and greatly appreciate the fresh appearance of minimally processed food. Besides, they look for safe and natural products without additives such as preservatives and humectants. In order to harmonize or blend all these demands without compromising the safety of the products, it is necessary to implement newer preservation technologies in the food industry. Although the fact that “high pressure kills microorganisms and preserves food” was discovered way back in 1899 and has been used with success in chemical, ceramic, carbon allotropy, steel/alloy, composite materials and plastic industries for decades, it was only in late 1980′s that its commercial benefits became available to the food processing industries. High pressure processing (HPP) is similar in concept to cold isostatic pressing of metals and ceramics, except that it demands much higher pressures, faster cycling, high capacity, and sanitation (Zimmerman and Bergman, 1993; Mertens and Deplace, 1993). Hite (1899) investigated the application of high pressure as a means of preserving milk, and later extended the study to preserve fruits and vegetables (Hite, Giddings, and Weakly, 1914). It then took almost eighty years for Japan to re- discover the application of high-pressure in food processing. The use of this technology has come about so quickly that it took only three years for two Japanese companies to launch products, which were processed using this technology. 
The ability of high pressure to inactivate microorganisms and spoilage catalyzing enzymes, whilst retaining other quality attributes, has encouraged Japanese and American food companies to introduce high pressure processed foods in the market (Mermelstein, 1997; Hendrickx, Ludikhuyze, Broeck, and Weemaes, 1998). The first high pressure processed foods were introduced to the Japanese market in 1990 by Meidi-ya, who have been marketing a line of jams, jellies, and sauces packaged and processed without application of heat (Thakur and Nelson, 1998). Other products include fruit preparations, fruit juices, rice cakes, and raw squid in Japan; fruit juices, especially apple and orange juice, in France and Portugal; and guacamole and oysters in the USA (Hugas, Garcia, and Monfort, 2002). In addition to food preservation, high- pressure treatment can result in food products acquiring novel structure and texture, and hence can be used to develop new products (Hayashi, 1990) or increase the functionality of certain ingredients. Depending on the operating parameters and the scale of operation, the cost of highpressure treatment is typically around US$ 0.05-0.5 per liter or kilogram, the lower value being comparable to the cost of thermal processing (Thakur and Nelson, 1998; Balasubramaniam, 2003). The non-availability of suitable equipment encumbered early applications of high pressure. However, recent progress in equipment design has ensured worldwide recognition of the potential for such a technology in food processing (Could, 1995; Galazka and Ledward, 1995; Balci and Wilbey, 1999). Today, high-pressure technology is acknowledged to have the promise of producing a very wide range of products, whilst simultaneously showing potential for creating a new generation of value added foods. In general, high-pressure technology can supplement conventional thermal processing for reducing microbial load, or substitute the use of chemical preservatives (Rastogi, Subramanian, and Raghavarao, 1994). Over the past two decades, this technology has attracted considerable research attention, mainly relating to: i) the extension of keeping quality (Cheftel, 1995; Farkas and Hoover, 2001), ii) changing the physical and functional properties of food systems (Cheftel, 1992), and iii) exploiting the anomalous phase transitions of water under extreme pressures, e.g. lowering of freezing point with increasing pressures (Kalichevsky, Knorr, and Lillford, 1995; Knorr, Schlueter, and Heinz, 1998). The key advantages of this technology can be summarized as follows: 1. it enables food processing at ambient temperature or even lower temperatures; 2. it enables instant transmittance of pressure throughout the system, irrespective of size and geometry, thereby making size reduction optional, which can be a great advantage; 3. it causes microbial death whilst virtually eliminating heat damage and the use of chemical preservatives/additives, thereby leading to improvements in the overall quality of foods; and 4. it can be used to create ingredients with novel functional properties. The effect of high pressure on microorganisms and proteins/ enzymes was observed to be similar to that of high temperature. As mentioned above, high pressure processing enables transmittance of pressure rapidly and uniformly throughout the food. Consequently, the problems of spatial variations in preservation treatments associated with heat, microwave, or radiation penetration are not evident in pressure-processed products. 
The application of high pressure increases the temperature of the liquid component of the food by approximately 3C per 100 MPa. If the food contains a significant amount of fat, such as butter or cream, the temperature rise is greater (8-9C/100 MPa) (Rasanayagam, Balasubramaniam, Ting, Sizer, Bush, and Anderson, 2003). Foods cool down to their original temperature on decompression if no heat is lost to (or gained from) the walls of the pressure vessel during the holding stage. The temperature distribution during the pressure-holding period can change depending on heat transfer across the walls of the pressure vessel, which must be held at the desired temperature for achieving truly isothermal conditions. In the case of some proteins, a gel is formed when the rate of compression is slow, whereas a precipitate is formed when the rate is fast. High pressure can cause structural changes in structurally fragile foods containing entrapped air such as strawberries or lettuce. Cell deformation and cell damage can result in softening and cell serum loss. Compression may also shift the pH depending on the imposed pressure. Heremans (1995) indicated a lowering of pH in apple juice by 0.2 units per 100 MPa increase in pressure. In combined thermal and pressure treatment processes, Meyer (2000) proposed that the heat of compression could be used effectively, since the temperature of the product can be raised from 70-90C to 105-120C by a compression to 700 MPa, and brought back to the initial temperature by decompression. As a thermodynamic parameter, pressure has far-reaching effects on the conformation of macromolecules, the transition temperature of lipids and water, and a number of chemical reactions (Cheftel, 1992; Tauscher, 1995). Phenomena that are accompanied by a decrease in volume are enhanced by pressure, and vice-versa (principle of Le Chatelier). Thus, under pressure, reaction equilibriums are shifted towards the most compact state, and the reaction rate constant is increased or decreased, depending on whether the “activation volume” of the reaction (i.e. volume of the activation complex less volume of reactants) is negative or positive. It is likely that pressure a\lso inhibits the availability of the activation energy required for some reactions, by affecting some other energy releasing enzymatic reactions (Farr, 1990). The compression energy of 1 litre of water at 400 MPa is 19.2 kJ, as compared to 20.9 kJ for heating 1 litre of water from 20 to 25C. The low energy levels involved in pressure processing may explain why covalent bonds of food constituents are usually less affected than weak interactions. Pressure can influence most biochemical reactions, since they often involve change in volume. High pressure controls certain enzymatic reactions. The effect of high pressure on protein/enzyme is reversible unlike temperature, in the range 100-400 MPa and is probably due to conformational changes and sub-unit dissociation and association process (Morild, 1981). For both the pasteurization and sterilization processes, a combined treatment of high pressure and temperature are frequently considered to be most appropriate (Farr, 1990; Patterson, Quinn, Simpson, and Gilmour, 1995). Vegetative cells, including yeast and moulds, are pressure sensitive, i.e. they can be inactivated by pressures of ~300-600 MPa (Knorr, 1995; Patterson, Quinn, Simpson, and Gilmour, 1995). At high pressures, microbial death is considered to be due to permeabilization of cell membrane. 
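As a back-of-envelope illustration of the compression-heating figures quoted earlier in this section (roughly 3°C per 100 MPa for water-like foods and 8-9°C per 100 MPa for high-fat foods), the sketch below treats the temperature rise as linear in pressure. The rates are the approximate values from the text, not measured thermophysical properties.

```python
# A back-of-envelope sketch of compression heating; the per-100-MPa rates are the
# approximate figures quoted in the text, not measured properties.

def compression_heating(pressure_mpa, rise_per_100mpa=3.0, start_temp_c=20.0):
    """Estimated product temperature (degC) after compression to pressure_mpa."""
    return start_temp_c + (pressure_mpa / 100.0) * rise_per_100mpa

print(compression_heating(600))                           # water-like food: ~38 degC
print(compression_heating(600, rise_per_100mpa=8.5))      # high-fat food:  ~71 degC
print(compression_heating(700, start_temp_c=90.0))        # preheated food, cf. Meyer (2000): ~111 degC
```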
For instance, it was observed that in the case of Saccharomyces cerevasia, at pressures of about 400 MPa, the structure and cytoplasmic organelles were grossly deformed and large quantities of intracellular material leaked out, while at 500 MPa, the nucleus could no longer be recognized, and a loss of intracellular material was almost complete (Farr, 1990). Changes that are induced in the cell morphology of the microorganisms are reversible at low pressures, but irreversible at higher pressures where microbial death occurs due to permeabilization of the cell membrane. An increase in process temperature above ambient temperature, and to a lesser extent, a decrease below ambient temperature, increases the inactivation rates of microorganisms during high pressure processing. Temperatures in the range 45 to 50C appear to increase the rate of inactivation of pathogens and spoilage microorganisms. Preservation of acid foods (pH ≤ 4.6) is, therefore, the most obvious application of HPP as such. Moreover, pasteurization can be performed even under chilled conditions for heat sensitive products. Low temperature processing can help to retain nutritional quality and functionality of raw materials treated and could allow maintenance of low temperature during post harvest treatment, processing, storage, transportation, and distribution periods of the life cycle of the food system (Knorr, 1995). Bacterial spores are highly pressure resistant, since pressures exceeding 1200 MPa may be needed for their inactivation (Knorr, 1995). The initiation of germination or inhibition of germinated bacterial spores and inactivation of piezo-resistive microorganisms can be achieved in combination with moderate heating or other pretreatments such as ultrasound. Process temperature in the range 90-121C in conjunction with pressures of 500-800 MPa have been used to inactivate spores forming bacteria such as Clostridium botulinum. Thus, sterilization of low-acid foods (pH > 4.6), will most probably rely on a combination of high pressure and other forms of relatively mild treatments. High-pressure application leads to the effective reduction of the activity of food quality related enzymes (oxidases), which ensures high quality and shelf stable products. Sometimes, food constituents offer piezo-resistance to enzymes. Further, high pressure affects only non-covalent bonds (hydrogen, ionic, and hydrophobic bonds), causes unfolding of protein chains, and has little effect on chemical constituents associated with desirable food qualities such as flavor, color, or nutritional content. Thus, in contrast to thermal processing, the application of high-pressure causes negligible impairment of nutritional values, taste, color flavor, or vitamin content (Hayashi, 1990). Small molecules such as amino acids, vitamins, and flavor compounds remain unaffected by high pressure, while the structure of the large molecules such as proteins, enzymes, polysaccharides, and nucleic acid may be altered (Balci and Wilbey, 1999). High pressure reduces the rate of browning reaction (Maillard reaction). It consists of two reactions, condensation reaction of amino compounds with carbonyl compounds, and successive browning reactions including metanoidin formation and polymerization processes. The condensation reaction shows no acceleration by high pressure (5-50 MPa at 50C), because it suppresses the generation of stable free radicals derived from melanoidin, which are responsible for the browning reaction (Tamaoka, Itoh, and Hayashi, 1991). 
Gels induced by high pressure are found to be more glossy and transparent because of the rearrangement of water molecules surrounding amino acid residues in the denatured state (Okamoto, Kawamura, and Hayashi, 1990). The capabilities and limitations of HPP have been extensively reviewed (Thakur and Nelson, 1998; Smelt, 1998; Cheftel, 1995; Knorr, 1995; Farr, 1990; Tiwari, Jayas, and Holley, 1999; Cheftel, Levy, and Dumay, 2000; Messens, Van Camp, and Huyghebaert, 1997; Otero and Sanz, 2000; Hugas, Garriga, and Monfort, 2002; Lakshmanan, Piggott, and Paterson, 2003; Balasubramaniam, 2003; Matser, Krebbers, Berg, and Bartels, 2004; Hogan, Kelly, and Sun, 2005; Mor-Mur and Yuste, 2005). Many of the early reviews primarily focused on the microbial efficacy of high-pressure processing. This review comprehensively covers the different types of products processed by high-pressure technology alone or in combination with other processes. It also discusses the effect of high pressure on food constituents such as enzymes and proteins. The applications of this technology in the fruit and vegetable, dairy, and animal product processing industries are covered. The effects of combining high-pressure treatment with other processing methods such as gamma-irradiation, alternating current, ultrasound, carbon dioxide, and antimicrobial peptides are also described. Special emphasis has been given to opportunities and challenges in high-pressure processing of foods, which can potentially be explored and exploited.

EFFECT OF HIGH PRESSURE ON ENZYMES AND PROTEINS

Enzymes are a special class of proteins in which biological activity arises from active sites, brought together by the three-dimensional configuration of the molecule. Changes in the active site or protein denaturation can lead to loss of activity or to changes in the functionality of the enzyme (Tsou, 1986). In addition to conformational changes, enzyme activity can be influenced by pressure-induced decompartmentalization (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996). Pressure-induced damage of membranes facilitates enzyme-substrate contact; the resulting reaction can either be accelerated or retarded by pressure (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996; Morild, 1981). Hendrickx, Ludikhuyze, Broeck, and Weemaes (1998) and Ludikhuyze, Van Loey, Indrawati, et al. (2003) reviewed the combined effect of pressure and temperature on enzymes related to the quality of fruits and vegetables, covering kinetic information as well as process engineering aspects.

Pectin methylesterase (PME) is an enzyme that normally tends to lower the viscosity of fruit products and adversely affect their texture. Hence, its inactivation is a prerequisite for the preservation of such products. Commercially, fruit products containing PME (e.g. orange juice and tomato products) are heat pasteurized to inactivate PME and prolong shelf life. However, heating can deteriorate the sensory and nutritional quality of the products. Basak and Ramaswamy (1996) showed that the inactivation of PME in orange juice was dependent on pressure level, pressure-hold time, pH, and total soluble solids: an instantaneous pressure kill depended only on the pressure level, while a secondary inactivation effect depended on holding time at each pressure level. Nienaber and Shellhammer (2001) studied the kinetics of PME inactivation in orange juice over a range of pressures (400-600 MPa) and temperatures (25-50C) for various process holding times.
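The two-part picture just described for orange-juice PME (an instantaneous kill on reaching pressure plus a slower effect during the hold) can be written as a simple illustrative model, sketched below in Python. Both the functional form chosen for the instantaneous step and the numerical parameters are assumptions for illustration, not the values fitted by Basak and Ramaswamy (1996).

import math

def residual_activity(p_mpa, hold_min, loss_per_100mpa=0.12, k_hold_per_min=0.03):
    # Fraction of PME activity remaining after a pressure treatment:
    # an instantaneous loss proportional to the pressure level (assumed form)
    # multiplied by first-order decay during the holding time.
    instant_fraction = max(0.0, 1.0 - loss_per_100mpa * p_mpa / 100.0)
    hold_fraction = math.exp(-k_hold_per_min * hold_min)
    return instant_fraction * hold_fraction

if __name__ == "__main__":
    for hold in (0, 10, 20):
        print(f"500 MPa, {hold:2d} min hold -> residual activity {residual_activity(500, hold):.2f}")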
PME inactivation followed a first-order kinetic model, with a residual activity attributable to a pressure-resistant enzyme fraction. Calculated D-values ranged from 4.6 to 117.5 min at 600 MPa/50C and 400 MPa/25C, respectively. Pressures in excess of 500 MPa resulted in inactivation rates fast enough for economic viability of the process. Binh, Van Loey, Fachin, Verlent, Indrawati, and Hendrickx (2002a, 2002b) studied the kinetics of inactivation of strawberry PME. The combined effect of pressure and temperature on the inactivation kinetics followed a fractional-conversion model. Purified strawberry PME was more stable toward high-pressure treatment than PME from oranges and bananas. Ly-Nguyen, Van Loey, Fachin, Verlent, and Hendrickx (2002) showed that the inactivation of banana PME during heating at temperatures between 65 and 72.5C followed first-order kinetics, and that the effect of pressure treatment at 600-700 MPa and 10C could be described using a fractional-conversion model. Stoforos, Crelier, Robert, and Taoukis (2002) demonstrated that under ambient pressure, tomato PME inactivation rates increased with temperature, with the highest rate obtained at 75C. The inactivation rates were dramatically reduced as soon as the processing pressure was raised above ambient; high inactivation rates were obtained again at pressures higher than 700 MPa. Riahi and Ramaswamy (2003) studied the high-pressure inactivation kinetics of PME isolated from a variety of sources and showed that PME from a microbial source was more resistant to pressure inactivation than PME from orange peel. Almost a full decimal reduction in activity of commercial PME was achieved at 400 MPa within 20 min. Verlent, Van Loey, Smout, Duvetter, Nguyen, and Hendrickx (2004) indicated that the optimal temperature for tomato pectin methylesterase shifted to higher values at elevated pressure compared to atmospheric pressure, creating possibilities for rheology improvement by the application of high pressure. Castro, Van Loey, Saraiva, Smout, and Hendrickx (2006) accurately described the inactivation of the labile fraction under mild-heat and high-pressure conditions by a fractional-conversion model, while a biphasic model was used to estimate the inactivation rate constants of both fractions at more drastic combinations of temperature and pressure (10-64C, 0.1-800 MPa). At pressures lower than 300 MPa and temperatures higher than 54C, an antagonistic effect of pressure and temperature was observed. Balogh, Smout, Binh, Van Loey, and Hendrickx (2004) observed the inactivation kinetics of carrot PME to follow first-order kinetics over a range of pressures and temperatures (650-800 MPa, 10-40C). Enzyme stability under heat and pressure was reported to be lower in carrot juice and purified PME preparations than in carrots.

The presence of pectinesterase (PE) reduces the quality of citrus juices by destabilizing the cloud. Generally, inactivation of the enzyme is accomplished by heat, resulting in a loss of fresh fruit flavor in the juice. High-pressure processing can be used to bypass the use of extreme heat for the processing of fruit juices. Goodner, Braddock, and Parish (1998) showed that higher pressures (>600 MPa) caused instantaneous inactivation of the heat-labile form of the enzyme but did not inactivate the heat-stable form of PE in orange and grapefruit juices. PE activity was totally lost in orange juice, whereas complete inactivation was not possible in grapefruit juice.
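Since the fractional-conversion model recurs in several of the studies cited above, a minimal sketch may help: first-order decay of the pressure-sensitive enzyme fraction towards a non-zero residual activity, with the D-value of that fraction given by D = ln(10)/k. The rate constant and residual fraction in the example are assumed values, not fitted literature parameters.

import math

def fractional_conversion(t_min, a0=1.0, a_res=0.10, k_per_min=0.25):
    # Residual enzyme activity after t_min of pressure holding:
    # A(t) = A_res + (A0 - A_res) * exp(-k * t), where A_res is the activity
    # of the pressure-resistant fraction.
    return a_res + (a0 - a_res) * math.exp(-k_per_min * t_min)

def d_value_from_k(k_per_min):
    # Decimal reduction time of the pressure-sensitive fraction, D = ln(10)/k.
    return math.log(10.0) / k_per_min

if __name__ == "__main__":
    print(f"D-value of the labile fraction: {d_value_from_k(0.25):.1f} min")
    for t in (0, 5, 10, 20):
        print(f"activity after {t:2d} min: {fractional_conversion(t):.2f}")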
Orange juice pressurized at 700 MPa for 1 min had no cloud loss for more than 50 days. Broeck, Ludikhuyze, Van Loey, and Hendrickx (2000) studied the combined pressure-temperature inactivation of the labile fraction of orange PE over a range of pressures (0.1 to 900 MPa) and temperatures (15 to 65C). The pressure and temperature dependence of the inactivation rate constants of the labile fraction was quantified using the well-known Eyring and Arrhenius relations. The stable fraction was inactivated only at temperatures higher than 75C. Acidification (pH 3.7) enhanced the thermal inactivation of the stable fraction, whereas the addition of Ca2+ ions (1 M) suppressed inactivation. At elevated pressure (up to 900 MPa), an antagonistic effect of pressure and temperature on inactivation of the stable fraction was observed. Ly-Nguyen, Van Loey, Smout, Ozcan, Fachin, Verlent, Vu-Truong, Duvetter, and Hendrickx (2003) investigated the effect of combined heat and pressure treatments on the inactivation of purified carrot PE, which followed a fractional-conversion model. The thermally stable fraction of the enzyme could not be inactivated. At lower pressures (<300 MPa) and higher temperatures (>50C), an antagonistic effect of pressure and heat was observed.

High pressure induces conformational changes in polygalacturonase (PG), causing reduced substrate-binding affinity and enzyme inactivation. Eun, Seok, and Wan (1999) studied the effect of high-pressure treatment on PG from Chinese cabbage to prevent the softening and spoilage of plant-based foods such as kimchi without compromising quality; PG was inactivated by the application of pressures higher than 200 MPa for 1 min. Fachin, Van Loey, Indrawati, Ludikhuyze, and Hendrickx (2002) investigated the stability of tomato PG at different temperatures and pressures. The combined pressure-temperature inactivation (300-600 MPa/5-50C) of tomato PG was described by a fractional-conversion model, which points to first-order inactivation kinetics of a pressure-sensitive enzyme fraction and to the occurrence of a pressure-stable PG fraction. Fachin, Smout, Verlent, Binh, Van Loey, and Hendrickx (2004) indicated that for combined pressure-temperature treatments (5-55C/100-600 MPa), the inactivation of the heat-labile portion of purified tomato PG followed first-order kinetics; the heat-stable fraction of the enzyme showed pressure stability very similar to that of the heat-labile portion. Peeters, Fachin, Smout, Van Loey, and Hendrickx (2004) demonstrated that the effect of high pressure was identical on the heat-stable and heat-labile fractions of tomato PG. The isoenzyme of PG was detected in thermally treated (140C for 5 min) tomato pieces and tomato juice, whereas no PG was found in pressure-treated tomato juice or pieces. Verlent, Van Loey, Smout, Duvetter, and Hendrickx (2004) investigated the effect of high pressure (0.1-500 MPa) and temperature (25-80C) on purified tomato PG. At atmospheric pressure, the optimum temperature for the enzyme was found to be 55-60C, and it decreased with an increase in pressure. The enzyme activity was reported to decrease with an increase in pressure at a constant temperature. Shook, Shellhammer, and Schwartz (2001) studied the ability of high pressure to inactivate lipoxygenase, PE, and PG in diced tomatoes. Processing conditions used were 400, 600, and 800 MPa for 1, 3, and 5 min at 25 and 45C. The magnitude of the applied pressure had a significant effect in inactivating lipoxygenase and PG, with complete loss of activity occurring at 800 MPa.
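Because the Eyring and Arrhenius relations mentioned above are the standard way this literature expresses how an inactivation rate constant depends on pressure and temperature, a short sketch of the combined dependence is given below. All numerical parameters (k_ref, the activation energy, and the activation volume) are illustrative assumptions, not the values fitted by Broeck, Ludikhuyze, Van Loey, and Hendrickx (2000).

import math

R = 8.314  # J/(mol*K)

def inactivation_rate_constant(temp_c, p_mpa,
                               k_ref=0.05,        # 1/min at the reference state (assumed)
                               t_ref_c=45.0, p_ref_mpa=0.1,
                               ea_kj_mol=100.0,   # Arrhenius activation energy (assumed)
                               va_cm3_mol=-15.0): # Eyring activation volume (assumed)
    # Combined Arrhenius (temperature) and Eyring (pressure) dependence of an
    # enzyme inactivation rate constant, returned in 1/min.
    t, t_ref = temp_c + 273.15, t_ref_c + 273.15
    arrhenius = math.exp(-(ea_kj_mol * 1e3 / R) * (1.0 / t - 1.0 / t_ref))
    eyring = math.exp(-(va_cm3_mol * 1e-6) * (p_mpa - p_ref_mpa) * 1e6 / (R * t))
    return k_ref * arrhenius * eyring

if __name__ == "__main__":
    for temp_c, p in ((45, 0.1), (65, 0.1), (45, 600), (65, 600)):
        print(f"{temp_c} C, {p:5.1f} MPa -> k = {inactivation_rate_constant(temp_c, p):.4f} 1/min")

With a negative activation volume the sketch predicts faster inactivation as pressure rises; an antagonistic pressure-temperature region such as the one reported for carrot PE would correspond to a sign change in the effective activation volume, which this simple form does not capture.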
PE was very resistant to the pressure treatment.

Polyphenoloxidase and Peroxidase

Polyphenoloxidase (PPO) and peroxidase (POD), the enzymes responsible for color and flavor loss, can be selectively inactivated by a combined treatment of pressure and temperature. Gomes and Ledward (1996) studied the effects of pressure treatment (100-800 MPa for 1-20 min) on commercial PPO from mushrooms, potatoes, and apples. Castellari, Matricardi, Arfelli, Rovere, and Amati (1997) demonstrated that there was only limited inactivation of grape PPO at pressures between 300 and 600 MPa; at 900 MPa, a low level of PPO activity remained. In order to reach complete inactivation, it may be necessary to use high-pressure treatment in conjunction with a mild thermal treatment (40-50C). Weemaes, Ludikhuyze, Broeck, and Hendrickx (1998) studied the pressure stability of PPO from apples, avocados, grapes, pears, and plums at pH 6-7. These PPOs differed in pressure stability: inactivation of PPO from apple, grape, avocado, and pear at room temperature (25C) became noticeable at approximately 600, 700, 800, and 900 MPa, respectively, and followed first-order kinetics, whereas plum PPO was not inactivated at room temperature by pressures up to 900 MPa. Rastogi, Eshtiaghi, and Knorr (1999) studied the inactivation effect of high hydrostatic pressure treatment (100-600 MPa) combined with heat treatment (0-60C) on POD and PPO, in order to develop high-pressure-processed red grape juice with a stable shelf life. The studies showed that the lowest POD (55.75%) and PPO (41.86%) activities were found at 60C, with pressures of 600 and 100 MPa, respectively. MacDonald and Schaschke (2000) showed that for PPO, temperature and pressure individually appeared to have similar effects, whereas the holding time was not significant; for POD, on the other hand, temperature as well as the interaction between temperature and holding time had the greatest effect on activity. Namkyu, Seunghwan, and Kyung (2002) showed that mushroom PPO was highly pressure-stable. Exposure to 600 MPa for 10 min reduced PPO activity by 7%; further exposure had no denaturing effect. Compression for 10 and 20 min up to 800 MPa reduced activity by 28 and 43%, respectively. Rapeanu, Van Loey, Smout, and Hendrickx (2005) indicated that the thermal and/or high-pressure inactivation of grape PPO followed first-order kinetics. A third-degree polynomial described the temperature/pressure dependence of the inactivation rate constants. Pressure and temperature were reported to act synergistically, except in the high-temperature (≥45C), low-pressure (≤300 MPa) region, where an antagonistic effect was observed. Gomes, Sumner, and Ledward (1997) showed that the application of increasing pressures led to a gradual reduction in papain enzyme activity. A decrease in activity of 39% was observed when the enzyme solution was initially activated with phosphate buffer (pH 6.8) and subjected to 800 MPa at ambient temperature for 10 min, while 13% of the original activity
| 213
|
Not Just for Kids
The Hunt for Falling Leaves...
Nature's Color on the Ground
by Mary Catherine Ball
Being a reporter, I am always looking for an adventure. Last week, I found one.
I left work to go on a simple journey, but it turned out to be much more.
First, I crossed a mud-ridden stream. Then, I came face to face with flying creatures, fighting to get near me. I even endured webmakers spinning my hair into a shiny maze.
Where did I go? Into the woods, of course. Why? I wanted to gather some fallen leaves.
My luck was good that day. I was able to spy lots of different kinds of leaves lying on the ground. Some were leaves I had never seen. Some were still green, while others were changing to their autumn colors.
Have you ever hunted for leaves? Do you know the names of five of the trees that live in your neighborhood? I bet the answer is no.
Well, me neither. So I had five of the leaves analyzed. I had found the leaves from an oak, a beech, a sweet gum tree and more.
Now, I invite you to make this journey.
[Leaf guide clues: narrow body with pointy edges; may grow berries; good for sap & color; 3 distinct leaves; may grow nuts]
This is your task...
Travel to the deep, dark woods (in the daylight) to find these 5 leaves. Cut out the page and take it with you. Make sure you can match your discovery with mine. Happy leaf-hunting!
Stone Soup October 9 (11:30am)-Enjoy lunch and a show. After you eat peanut butter & jelly, watch Stone Soup, performed by the Lost Caravan. Lunch is at noon; show starts at 12:30. Chesapeake Music Hall, Off Rt. 50 approaching the Bay Bridge: 410/406-0306.
All Aboard October 9 & 10 (2pm)-Chug a chug to Zany Brainy for train fun. Listen to stories and sing railroad songs. Build your own trains. Ages 3+. Zany Brainy, Annap. Harb. Ctr.: 410/266-1447.
Tiny Tots Fall for Nature Tues. Oct 12 (10:30am-noon) Three- to five-year-olds (and their adult) hike into the woods to hear autumn stories, gather leaves and make a craft. Bring a bag lunch to enjoy w/apple cider @ King's Landing Park, Huntingtown. $3 rsvp: 800/735-2258.
Spooky Stories in the Woods Tues. Oct. 12 (10:30-noon)-Hike with a ranger to a clearing in the woods. Listen to autumn stories and drink warm apple cider. Gather leaves to make a craft. Bring a bag lunch. Kings Landing Park, Huntingtown: 410/535-5327.
Musical Minds Wed. Oct. 13 (4-4:45pm)-Music makes the world go round. So sing, listen to stories and play musical instruments. Ages 2-4. Chesapeake Children's Museum, Festival at Riva. $8.50; rsvp: 410/266-0677.
Nature Designs Deadline Oct. 15-Create your favorite nature scene out of clay or on paper to win prizes. Age categories are 3-5, 6-8 and 9-12 years. Place winners win nature books or statues. All art forms accepted. Take your masterpiece to Wild Bird Center, Annapolis Harbour Center: 410/573-0345.
Tiny Tots get in Touch with Mother Nature Sat. Oct 16 (10-10:30am)-Tiny tots (2-3 w/adult) learn nature by touch, feeling the many different textures rough and soft in the world around us. @ Battle Creek Cypress Swamp, Prince Frederick: 800/735-2258.
| Issue 40 |
Volume VII Number 40
October 7-13, 1999
New Bay Times
| Homepage |
| Back to Archives |
| 63
|
How do you get HIV?
HIV can be passed on when infected bodily fluid, such as blood or semen, passes into an uninfected person. Semen is the liquid released from a man's penis during sex, and it carries sperm; it can carry HIV when the man is HIV positive. Transmission can happen during unprotected sex, for example when two people have sex without using a condom and one partner is already infected, or between drug users who inject and share needles.
It can't be transmitted by things like coughing, sneezing, kissing, sharing a toilet seat, swimming pools, sweat and tears.
| 640
|
By Allison Lichter
Young children who can sit still and focus are more likely to graduate from college than children who are easily distracted, according to a new study from Oregon State University.
Researchers made use of data that spanned 25 years, tracking a group of 430 preschool-age children through age 21. Parents were asked to rate their children on such behaviors as “playing with a single toy for long periods of time” or “gives up easily” when a task or game gets hard.
“We were interested in identifying early predictors of later success,” said Megan McClelland, early childhood research core director at OSU’s Hallie E. Ford Center for Healthy Children and Families, and a lead author on the study.
“We know that early academic skills predict later academic skills,” she said. “But the ability to pay attention and focus are foundational skills that help kids persist through difficult tasks when they need to.”
She was quick to add that academic skills are critical, and that aptitude is “part of the equation.”
Still, she said, “There are a lot of very smart people who have difficulty finishing tasks.”
Of course, a lot transpires in a person’s life between age 4 and age 21. While researchers couldn’t control for all variables, McClelland said, they were able to control for some of the most important predictors of academic success, including parents’ education level, the child’s cognitive ability, and childhood vocabulary.
Teachers and parents can help young children develop the qualities of persistence and self-discipline by adding new rules to some familiar games. For example, McClelland suggested playing “Red Light, Green Light,” but changing the rules so that “red” means go, and “green” means stop.
“Kids ask me all the time, ‘Are you trying to trick me?’ I tell them, ‘No, we want you to stop and think about what you’re doing first.’”
Another simple activity to help build the concentration muscle is a dance game in which children are told they have to dance slowly when the music is fast, and dance quickly when the music is slow.
“When kids have to control their bodies, it’s more challenging for them,” McClelland said. “They actually have to stop their movements and do something different. It requires a lot of thinking.”
McClelland is also the mother of a 3-year-old son. She acknowledges it takes time to help kids practice ways to focus and get engaged in things they are interested in.
“He will sit there for 15 or 20 minutes doing a Thomas the Train puzzle.” While putting it together, McClelland will sometimes ask ‘How many pieces have we used?’ to build her son’s aptitude for counting.
But she says, “I’m also helping him focus and persist.”
| 693
|