By EarthSky in ASTRONOMY ESSENTIALS | SPACE | May 11, 2018
And how many potentially exploding stars are located within the unsafe distance?
A supernova is a star explosion – destructive on a scale almost beyond human imagining. If our sun exploded as a supernova, the resulting shock wave probably wouldn’t destroy the whole Earth, but the side of Earth facing the sun would boil away. Scientists estimate that the planet as a whole would heat to roughly 15 times the temperature of the sun’s surface. What’s more, Earth wouldn’t stay put in orbit. The sudden decrease in the sun’s mass might free the planet to wander off into space. Clearly, the sun’s distance – 8 light-minutes away – isn’t safe. Fortunately, our sun isn’t the sort of star destined to explode as a supernova. But other stars, beyond our solar system, will. What is the closest safe distance? Scientific literature cites 50 to 100 light-years as the closest safe distance between Earth and a supernova.
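For a rough sense of why distance matters so much, the inverse-square law is enough: radiation flux falls off with the square of distance. Here is a back-of-the-envelope sketch in Python, using only the distances quoted above; real supernova effects also depend on the explosion’s spectrum and duration.

```python
# Inverse-square comparison of the two distances quoted above.
LIGHT_MINUTES_PER_LY = 365.25 * 24 * 60   # light-minutes in one light-year

sun_distance = 8.0                         # light-minutes
safe_distance = 50 * LIGHT_MINUTES_PER_LY  # 50 light-years, in light-minutes

# Flux falls as 1/d^2, so the same explosion at 50 light-years delivers
# this many times less energy per square meter than at 8 light-minutes:
dilution = (safe_distance / sun_distance) ** 2
print(f"{dilution:.1e}")  # ~1.1e13
```

In other words, moving the same explosion from the sun’s distance out to 50 light-years dilutes its flux by roughly thirteen orders of magnitude.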
What would happen if a supernova exploded near Earth? Let’s consider the explosion of a star other than our sun, but still at an unsafe distance – say, 30 light-years away. Dr. Mark Reid, a senior astronomer at the Harvard-Smithsonian Center for Astrophysics, has said:
… were a supernova to go off within about 30 light-years of us, that would lead to major effects on the Earth, possibly mass extinctions. X-rays and more energetic gamma-rays from the supernova could destroy the ozone layer that protects us from solar ultraviolet rays. It also could ionize nitrogen and oxygen in the atmosphere, leading to the formation of large amounts of smog-like nitrous oxide in the atmosphere.
What’s more, if a supernova exploded within 30 light-years, phytoplankton and reef communities would be particularly affected. Such an event would severely deplete the base of the ocean food chain.
Suppose the explosion were slightly more distant. An explosion of a nearby star might leave Earth and its surface and ocean life relatively intact. But any relatively nearby explosion would still shower us with gamma rays and other high-energy radiation. This radiation could cause mutations in earthly life. Also, the radiation from a nearby supernova could change our climate.
No supernova is known to have erupted within this unsafe distance in recorded human history. The most recent supernova visible to the unaided eye was Supernova 1987A, in the year 1987. It was approximately 168,000 light-years away.
Before that, the last supernova visible to the eye was documented by Johannes Kepler in 1604. At about 20,000 light-years, it shone more brightly than any star in the night sky. It was even visible in daylight! But it didn’t cause earthly effects, as far as we know.
How many potential supernovae are located closer to us than 50 to 100 light-years? The answer depends on the kind of supernova.
A Type II supernova results from the collapse of an aging massive star. There are no stars massive enough to do this located within 50 light-years of Earth.
But there are also Type I supernovae, in which a small, faint white dwarf star accretes matter until it detonates. White dwarfs are dim and hard to find, so we can’t be sure just how many are around. There are probably a few hundred of these stars within 50 light-years.
The star IK Pegasi B is the nearest known supernova progenitor candidate. It’s part of a binary star system, located about 150 light-years from our sun and solar system.
The main star in the system – IK Pegasi A – is an ordinary main sequence star, not unlike our sun. The potential Type I supernova is the other star – IK Pegasi B – a massive white dwarf that’s extremely small and dense. When the A star begins to evolve into a red giant, it’s expected to grow to a radius where the white dwarf can accrete, or take on, matter from A’s expanded gaseous envelope. When the B star gets massive enough, it might collapse on itself, in the process exploding as a supernova. Read more about the IK Pegasi system from Phil Plait at Bad Astronomy.
What about Betelgeuse? Another star often mentioned in the supernova story is Betelgeuse, one of the brightest stars in our sky, part of the famous constellation Orion. Betelgeuse is a supergiant star. It is intrinsically very brilliant.
Such brilliance comes at a price, however. Betelgeuse is one of the most famous stars in the sky because it’s due to explode someday. Betelgeuse’s enormous energy requires that the fuel be expended quickly (relatively, that is), and in fact Betelgeuse is now near the end of its lifetime. Someday soon (astronomically speaking), it will run out of fuel, collapse under its own weight, and then rebound in a spectacular Type II supernova explosion. When this happens, Betelgeuse will brighten enormously for a few weeks or months, perhaps as bright as the full moon and visible in broad daylight.
When will it happen? Probably not in our lifetimes, but no one really knows. It could be tomorrow or a million years in the future. When it does happen, any beings on Earth will witness a spectacular event in the night sky, but earthly life won’t be harmed. That’s because Betelgeuse is 430 light-years away. Read more about Betelgeuse as a supernova.
How often do supernovae erupt in our galaxy? No one knows. Scientists have speculated that the high-energy radiation from supernovae has already caused mutations in earthly species, maybe even human beings.
One estimate suggests there might be one dangerous supernova event in Earth’s vicinity every 15 million years. Another says that, on average, a supernova explosion occurs within 10 parsecs (33 light-years) of the Earth every 240 million years. So you see we really don’t know. But you can contrast those numbers to the few million years humans are thought to have existed on the planet – and four-and-a-half billion years for the age of Earth itself.
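Those two quoted rates can be turned into rough expected counts. A quick sketch, where the “few million years” of human existence is taken as an assumed round 3 million:

```python
# Rough expected counts implied by the two rate estimates quoted above.
earth_age_yr = 4.5e9
human_span_yr = 3e6            # "few million years", assumed as 3 million

rate_dangerous = 1 / 15e6      # dangerous nearby events per year
rate_within_33ly = 1 / 240e6   # events within 10 pc (33 ly) per year

n_dangerous = earth_age_yr * rate_dangerous      # ~300 over Earth's history
n_within_33ly = earth_age_yr * rate_within_33ly  # ~19 over Earth's history
n_human_era = human_span_yr * rate_within_33ly   # ~0.0125 while humans existed

print(f"{n_dangerous:.0f} {n_within_33ly:.0f} {n_human_era:.4f}")
```

Even with the estimates disagreeing by more than an order of magnitude, the pattern is the same: dozens to hundreds of nearby events over Earth’s lifetime, but only a percent-level chance of one during the span of human existence.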
And, if you do that, you’ll see that a supernova is certain to occur near Earth – but probably not in the foreseeable future of humanity.
Bottom line: Scientific literature cites 50 to 100 light-years as the closest safe distance between Earth and a supernova.
THE BEATLES’ REMARKABLE catalog includes just one official live album, and the group’s immense popularity made it unlistenable. The Beatles at the Hollywood Bowl, recorded in 1964 and 1965 but not released until 1977, was always a frustrating listen. Try as you might, you simply cannot hear much music above the fan-belt squeal of 10,000 Beatlemaniacs.
You can’t blame the Fab Four, nor their legendary producer George Martin. Martin did what he could with the three-track tapes, but the limitations of 1970s technology did little to elevate the music above the din. Boosting the high frequencies—the snap of Ringo Starr’s hi-hat, the shimmer and chime of George Harrison’s guitar—only made the racket made by all those fans even louder.
All of which makes the remastered version of Live at the Hollywood Bowl especially impressive. The do-over, which coincided with the August release of Ron Howard’s documentary film Eight Days a Week, squeezes astonishing clarity out of the source tapes. You can finally hear an exceptionally tight band grinding out infectious blues-based rock propelled by a driving beat, wailing guitars, and raspy vocals. This album never sounded so lucid, present, or weighty.
“What became apparent when you compared it to what came out in 1977 is how hard Ringo is hitting the drums,” says Giles Martin, George Martin’s son and the producer of the remastered album. “How hard the band were really digging in. We didn’t really know about that before. You take these layers of natural tape effects away to get to the heart of the performance, and when you get there, you actually hear the dynamics.”
Technological wizardry helped uncover the hidden sonics. But don’t think you can just run out and buy the same software to make your crappy Can bootlegs listenable. There’s no checkbox in ProTools to reverse-engineer a lousy recording. To get a sense of what the team at Abbey Road Studios did, imagine deconstructing a smoothie so you’re left with the strawberries, bananas, and ice in their original forms, just so you can blend them again from scratch.
To do that, James Clarke, a systems analyst at Abbey Road Studios, developed a “demixing” process to separate each instrument and vocal track from the cacophony. He isolated everything Ringo, Harrison, Paul McCartney, and John Lennon played and sang, separated it from the din of the crowd, and effectively created clean tracks to remaster. Fittingly, Clarke’s audio-modeling process used spectrograms—imagery you might associate with ghost-hunting—to bring the spirit of these live performances back to life.
“It doesn’t exist as a software program that is easy to use,” Clarke says. “It’s a lot of Matlab, more like a research tool. There’s no graphical front end where you can just load a piece of audio up, paint a track, and extract the audio. I write manual scripts, which I then put into the engine to process.”
Make This Bird Sing
Before tackling the project, Martin told Clarke to take a crack at a track Martin thought might give the engineer fits. “I challenged him with ‘And Your Bird Can Sing’ on an acoustic guitar, and I knew just by being a mean, mean bastard that separating acoustic guitar and vocals was going to be the biggest challenge for him,” Martin says. “You have a lot of frequency crossover and distortion of signal path that goes on.”
Clarke passed that test. Then came the real challenge: Working with those three-track source tapes from the Hollywood Bowl to create digital models of each instrument, the vocals, and the enraptured crowd. From there, engineers could tweak each track to create the final mix.
Separating the kick drum and bass guitar proved relatively easy, because low frequencies don’t suffer from crossover with crowd noise. But vocals, guitars, snare drums, and cymbals share the same sonic real estate with the banshee wail of the fans. The Beatles’ virtuosity and consistency helped here. The modeling process involves using samples of each instrument to help the software determine what to look for and pull out into its own track. If the recording didn’t have a clean enough version of the track Clarke wanted to isolate, he used session recordings to build those audio fingerprints. “I went back to the studio versions to build the models,” he says. “They’re not as accurate, as there are usually temporal and tuning changes between playing in the studio and playing live, but the Beatles were pretty spot-on between studio and live versions.”
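Clarke’s actual Matlab pipeline is far more sophisticated, but the core idea of fitting known “fingerprints” to a mixture can be illustrated with a deliberately simplified toy: if clean templates of two sources are available, a least-squares fit recovers each one’s level in the mix, and subtracting one isolates the other. This is a sketch of the general principle, not the Abbey Road process.

```python
import math

# Toy model-based separation: fit known source "fingerprints" to a
# mixture by least squares, then subtract one source to isolate another.
# The two templates are sine tones at distinct integer frequencies over
# a full period, so they are orthogonal and the fit is a one-line formula.
N = 1000
bass  = [math.sin(2 * math.pi * 5 * i / N) for i in range(N)]   # low tone
crowd = [math.sin(2 * math.pi * 97 * i / N) for i in range(N)]  # "screams"
mix   = [0.8 * b + 1.5 * c for b, c in zip(bass, crowd)]

def fit_gain(template, signal):
    """Least-squares gain of template inside signal (valid here because
    the templates are mutually orthogonal)."""
    num = sum(t * s for t, s in zip(template, signal))
    den = sum(t * t for t in template)
    return num / den

g_bass = fit_gain(bass, mix)    # recovers ~0.8
g_crowd = fit_gain(crowd, mix)  # recovers ~1.5

# Subtract the fitted crowd to leave a clean bass track:
isolated_bass = [m - g_crowd * c for m, c in zip(mix, crowd)]
```

The hard part of the real job, of course, is that nobody hands you clean templates: that is exactly what the session-recording fingerprints and spectrogram models were for.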
After creating spectrogram models of each instrument, he loaded the files into what he calls his “little controller program.” A few hours later, it gave him a clean track of the instrument he modeled. All of those tracks went to the mixing engineer.
From the start, Martin hoped to make the recording as lifelike and accurate as possible. “I wanted to know what it was like watching the Beatles play live,” he says.
Clarke’s process could breathe new life into other old recordings. He and Martin say a few other bands have asked them about working a little magic on the live shows in their own archives, though they wouldn’t name names.
Liven It Up
The Beatles at the Hollywood Bowl is a live album, and Martin and Clarke decided to leave a little crowd noise in, even though Clarke says he achieved “nearly full separation” of the music and the audience. As with Bob Dylan’s 1966 concert at “Royal Albert Hall” and Johnny Cash’s gigs at Folsom and San Quentin prisons, the recording wouldn’t have the same energy without a little cheering and screaming. In the end, the remaster dropped the crowd noise by about 3 decibels. “They could have pushed it a lot further if they wanted to,” Clarke says, “but I think they got it spot on.” After almost 40 years, you can finally hear the Beatles in The Beatles Live at the Hollywood Bowl, and they sound glorious.
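That “3 decibels” figure is easy to unpack: decibels express a logarithmic ratio, so a 3 dB cut means the crowd’s power was roughly halved.

```python
# What "dropped the crowd noise by about 3 decibels" means numerically.
db_drop = -3.0
power_ratio = 10 ** (db_drop / 10)      # ~0.50: crowd power roughly halved
amplitude_ratio = 10 ** (db_drop / 20)  # ~0.71: waveform amplitude scaling
print(f"{power_ratio:.3f} {amplitude_ratio:.3f}")  # 0.501 0.708
```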
The U.S. Department of Energy’s Fermi National Accelerator Laboratory has achieved a significant milestone for proton beam power. On Jan. 24, the laboratory’s flagship particle accelerator delivered a 700-kilowatt proton beam over one hour at an energy of 120 billion electronvolts.
The Main Injector accelerator provides a massive number of protons to create particles called neutrinos, elusive particles that influence how our universe has evolved. Neutrinos are the second-most abundant matter particles in our universe. Trillions pass through us every second without leaving a trace.
Because they are so abundant, neutrinos can influence all kinds of processes, such as the formation of galaxies or supernovae. Neutrinos might also be the key to uncovering why there is more matter than antimatter in our universe. They might be one of the most valuable players in the history of our universe, but they are hard to capture and this makes them difficult to study.
“We push always for higher and higher beam powers at accelerators, and we are lucky our accelerator colleagues live for a challenge,” said Steve Brice, head of Fermilab’s Neutrino Division. “Every neutrino is an opportunity to study our universe further.”
With more beam power, scientists can provide more neutrinos in a given amount of time. At Fermilab, that means more opportunities to study these subtle particles at the lab’s three major neutrino experiments: MicroBooNE, MINERvA and NOvA.
“Neutrino experiments ask for the world, if they can get it. And they should,” said Dave Capista, accelerator scientist at Fermilab. Even higher beam powers will be needed for the future international Deep Underground Neutrino Experiment, to be hosted by Fermilab. DUNE, along with its supporting Long-Baseline Neutrino Facility, is the largest new project being undertaken in particle physics anywhere in the world since the Large Hadron Collider.
“It’s a negotiation process: What is the highest beam power we can reasonably achieve while keeping the machine stable, and how much would that benefit the neutrino researcher compared to what they had before?” said Fermilab accelerator scientist Mary Convery.
“This step-by-step journey was a technical challenge and also tested our understanding of the physics of high-intensity beams,” said Fermilab Chief Accelerator Officer Sergei Nagaitsev. “But by reaching this ambitious goal, we show how great the team of physicists, engineers, technicians and everyone else involved is.” The 700-kilowatt beam power was the goal declared for 2017 for Fermilab’s accelerator-based experimental program.
Particle accelerators are complex machines with many different parts that change and influence the particle beam constantly. One challenge with high-intensity beams is that they are relatively large and hard to handle. Particles in accelerators travel in groups referred to as bunches.
Roughly one hundred billion protons are in one bunch, and they need their space. The beam pipes – through which particles travel inside the accelerator – need to be big enough for the bunches to fit. Otherwise particles will scrape the inner surface of the pipes and get lost in the equipment.
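The quoted figures allow a back-of-the-envelope throughput estimate. Assuming steady delivery (a simplification of the Main Injector’s real, cyclic operation), a 700-kilowatt beam of 120 GeV protons implies:

```python
# Average proton throughput implied by a 700 kW beam at 120 GeV.
EV_TO_J = 1.602e-19
beam_power_w = 700e3
proton_energy_j = 120e9 * EV_TO_J          # energy carried by one proton

protons_per_second = beam_power_w / proton_energy_j
bunch_equivalents = protons_per_second / 1e11  # ~1e11 protons per bunch

print(f"{protons_per_second:.2e} protons/s")  # ~3.6e13
print(f"{bunch_equivalents:.0f} bunch-equivalents/s")
```

Tens of trillions of protons per second, on average, hitting the target to make neutrinos.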
Such losses, as they’re called, need to be controlled, so while working on creating the conditions to generate a high-power beam, scientists also study where particles get lost and how it happens. They perform a number of engineering feats that allow them to catch the wandering particles before they damage something important in the accelerator tunnel.
To generate high-power beams, the scientists and engineers at Fermilab use two accelerators in parallel. The Main Injector is the driver: It accelerates protons and subsequently smashes them into a target to create neutrinos. Even before the protons enter the Main Injector, they are prepared in the Recycler.
The Fermilab accelerator complex can’t create big bunches from the get-go, so scientists create them by merging two smaller bunches in the Recycler. A small bunch of protons is sent into the Recycler, where it waits until the next small bunch is sent in to join it. Imagine owning a small herd of cattle and then acquiring a second herd of the same size. Rather than caring for them separately, you let the two herds join each other on a big meadow to form one big herd, which you can then handle as one herd instead of two.
In this way Fermilab scientists double the number of particles in one bunch. The big bunches then go into the Main Injector for acceleration. This technique to increase the number of protons in each bunch had been used before in the Main Injector, but now the Recycler has been upgraded to be able to handle the process as well.
“The real bonus is having two machines doing the job,” said Ioanis Kourbanis, who led the upgrade effort. “Before we had the Recycler merging the bunches, the Main Injector handled the merging process, and this was time consuming. Now, we can accelerate the already merged bunches in the Main Injector and meanwhile prepare the next group in the Recycler. This is the key to higher beam powers and more neutrinos.”
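The speedup Kourbanis describes is, in effect, pipelining: overlapping the merge step with acceleration so that the slower stage sets the pace. A toy timing model makes the point; the stage times below are arbitrary placeholders, not Fermilab numbers.

```python
# Toy timing model of the two-machine scheme described above.
merge_time = 1.0   # Recycler: merge two small bunches (arbitrary units)
accel_time = 1.5   # Main Injector: accelerate a merged batch
n_batches = 10

# Before the upgrade: the Main Injector merged AND accelerated, serially.
serial_total = n_batches * (merge_time + accel_time)

# After: the Recycler merges the next batch while the Main Injector
# accelerates the current one, so the slower stage sets the pace.
pipelined_total = merge_time + n_batches * max(merge_time, accel_time)

print(serial_total, pipelined_total)  # 25.0 16.0
```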
Fermilab scientists and engineers combined two advantages of the proton acceleration technique to generate the desired truckloads of neutrinos: increasing the number of protons in each bunch and decreasing the delivery time of those protons to the neutrino-producing target.
“Attaining this promised power is an achievement of the whole laboratory,” Nagaitsev said. “It is shared with all who have supported this journey.”
The new heights will open many doors for the experiments, but no one will rest long on their laurels. The journey for high beam power continues, and new plans for even more beam power are already under way.
Published on Nov 28, 2016
New technology has been developed that uses nuclear waste to generate electricity in a nuclear-powered battery. A team of physicists and chemists from the University of Bristol have grown a man-made diamond that, when placed in a radioactive field, is able to generate a small electrical current. The development could solve some of the problems of nuclear waste, clean electricity generation and battery life.
By Gianluca Masi in SPACE | November 2, 2016
Astronomers discovered asteroid 2016 VA on November 1, 2016, just hours before it passed within 0.2 times the moon’s distance of Earth.
The near-Earth asteroid 2016 VA was discovered by the Mt. Lemmon Sky Survey in Arizona (USA) on 1 Nov. 2016 and announced later the same day by the Minor Planet Center. The object was going to have a very close encounter with the Earth, at 0.2 times the moon’s distance – about 75,000 km [46,000 miles]. At Virtual Telescope Project we grabbed extremely spectacular images and a unique video showing the asteroid eclipsed by the Earth.
The image above is a 60-second exposure, remotely taken with “Elena” (PlaneWave 17″ + Paramount ME + SBIG STL-6303E robotic unit) available at Virtual Telescope. The robotic mount tracked the extremely fast (570″/minute) apparent motion of the asteroid, so stars are trailing. The asteroid is perfectly tracked: it is the sharp dot in the center, marked with two white segments. At the imaging time, asteroid 2016 VA was about 200,000 km [124,000 miles] from us and approaching. Its diameter should be around 12 meters or so.
During its fly-by, asteroid 2016 VA was also eclipsed by the Earth’s shadow. We covered the spectacular event, clearly capturing also the penumbra effects.
The movie below is an amazing document showing the eclipse. Each frame comes from a 5-second integration.
The eclipse started around 23:23:56 UT and ended at about 23:34:46 UT. To our knowledge, this is the first video ever of a complete eclipse of an asteroid. Some hot pixels are visible in the image. At eclipse time, the asteroid was moving with an apparent motion of 1500″/minute and was about 120,000 km [75,000 miles] from Earth, still on its approach. You can see here a simulation of the eclipse as if you were on the asteroid.
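The quoted apparent motions and distances imply the asteroid’s sky-plane speed (the component across our line of sight only; it was also approaching radially). A quick check:

```python
import math

# Sky-plane speed implied by the quoted apparent motions and distances.
ARCSEC_TO_RAD = math.pi / (180 * 3600)

def transverse_speed_km_s(arcsec_per_min, distance_km):
    rad_per_s = (arcsec_per_min / 60) * ARCSEC_TO_RAD
    return rad_per_s * distance_km

v_imaging = transverse_speed_km_s(570, 200_000)   # ~9.2 km/s at imaging time
v_eclipse = transverse_speed_km_s(1500, 120_000)  # ~14.5 km/s during eclipse
```

Both figures are typical of an Earth-grazing near-Earth asteroid being accelerated by Earth’s gravity as it closes in.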
Bottom line: An asteroid called 2016 VA was discovered on November 1, 2016 and passed closest to Earth – within 0.2 times the moon’s distance – a few hours later. Gianluca Masi of the Virtual Telescope Project caught images of the asteroid as it swept by.
In a Roman mosaic from antiquity, a man on a street studies the sundial atop a tall column. The sun alerts him to hurry if he does not want to be late for a dinner invitation.
Sundials were ubiquitous in Mediterranean cultures more than 2,000 years ago. They were the clocks of their day, early tools essential to reckoning the passage of time and its relationship to the larger universe.
The mosaic image is an arresting way station in a new exhibition, “Time and the Cosmos in Greco-Roman Antiquity,” that opened last week in Manhattan at the Institute for the Study of the Ancient World, an affiliate of New York University. It will continue until April.
The image’s message, the curator Alexander Jones explains in the exhibition catalog, is clearly delivered in a Greek inscription, which reads, “The ninth hour has caught up.” Or further translated by him into roughly modern terms, “It’s 3 p.m. already.” That was the regular dinnertime in those days.
Dr. Jones, the institute’s interim director, is a scholar of the history of exact science in antiquity. He further imagined how some foot-dragging skeptics then probably lamented so many sundials everywhere and the loss of simpler ways, when “days were divided just into morning and afternoon and one guessed how much daylight remained by the length of one’s own shadow without giving much thought to punctuality.”
An even more up-to-date version of the scene, he suggested, would show a man or a woman staring at a wristwatch or, even better, a smartphone, while complaining that our culture “has allowed technology and science to impose a rigid framework of time on our lives.”
Jennifer Y. Chi, the institute’s exhibition director, said: “The recurring sight of people checking the time on their cellphones or responding to a beep alerting them to an upcoming event are only a few modern-day reminders of time’s sway over public and private life. Yet while rapidly changing technology gives timekeeping a contemporary cast, its role in organizing our lives owes a great deal to the ancient Greeks and Romans.”
The exhibition features more than 100 objects on loan from international collections, including a dozen or so sundials. One is a rare Greek specimen from the early 3rd century B.C. The large stone instruments typically belonged to public institutions or wealthy landowners.
A few centuries later, portable sundials were introduced. Think of pocket watches coming in as movable timekeepers in place of the grandfather clock in the hall or on the mantel. Portable sundials were first mentioned in ancient literature as timekeepers for travelers. The earliest surviving one is from the first century A.D.
Six of these small sundials are displayed in the exhibition. These were owned and used mostly as prestige objects by those at the upper echelons of society and by the few people who traveled to faraway latitudes.
A bronze sundial in the center of one gallery is marked for use in 30 localities at latitudes ranging from Egypt to Britain. Few people in antiquity were ever likely to travel that widely.
A small sundial found in the tomb of a Roman physician suggests that it was more than a prestige object. The doctor was buried with his medical instruments and pills for eye ailments, as seen in a display. Presumably he needed a timekeeper for dispensing doses. He may also have practiced ancient medical theories in which astrology prescribed certain hours as good or bad for administering meals and medicine.
Apparent time cycles fascinated people of this era. One means of keeping track of them was the parapegma, a stone slab with holes representing the days, along with inscriptions or images to interpret them. Each day, a peg was moved from one hole to the next. The appearances and disappearances of constellations in the night sky yielded patterns that served as signs of predictable weather changes in the solar year of 365 or 366 days, as well as of when conditions were favorable for planting and reaping, and even of whether good or bad luck would follow.
For many people, astrology was probably the most popular outgrowth of advances in ancient timekeeping. Astrology — not to be confused with modern astronomy — emerged out of elements from Babylonian, Egyptian and Greek science and philosophy in the last two centuries B.C. Because the heavens and the earth were thought to be connected in so many ways, the destinies of nations as well as individuals presumably could be read by someone with expertise in the arrangements of the sun, the moon, the known planets and constellations in the zodiac.
Wealthy people often had their complete horoscopes in writing and zodiacal signs portrayed in ornamental gems, especially if they deemed the cosmic configuration at their conception or birth to be auspicious.
It is said that the young Octavian, the later emperor Augustus, visited an astrologer to have his fortune told. He hesitated at first to disclose the time and date of his birth, lest the prediction turn out to be inauspicious. He finally relented.
Research shows that an emphasis on memorization, rote procedures and speed impairs learning and achievement
By Jo Boaler, Pablo Zoido | SA Mind November 2016 Issue
In December the Program for International Student Assessment (PISA) will announce the latest results from the tests it administers every three years to hundreds of thousands of 15-year-olds around the world. In the last round, the U.S. posted average scores in reading and science but performed well below other developed nations in math, ranking 36th out of 65 countries.
We do not expect this year’s results to be much different. Our nation’s scores have been consistently lackluster. Fortunately, though, the 2012 exam collected a unique set of data on how the world’s students think about math. The insights from that study, combined with important new findings in brain science, reveal a clear strategy to help the U.S. catch up.
The PISA 2012 assessment questioned not only students’ knowledge of mathematics but also their approach to the subject, and their responses reflected three distinct learning styles. Some students relied predominantly on memorization. They indicated that they grasp new topics in math by repeating problems over and over and trying to learn methods “by heart.” Other students tackled new concepts more thoughtfully, saying they tried to relate them to those they already had mastered. A third group followed a so-called self-monitoring approach: they routinely evaluated their own understanding and focused their attention on concepts they had not yet learned.
In every country, the memorizers turned out to be the lowest achievers, and countries with high numbers of them—the U.S. was in the top third—also had the highest proportion of teens doing poorly on the PISA math assessment. Further analysis showed that memorizers were approximately half a year behind students who used relational and self-monitoring strategies. In no country were memorizers in the highest-achieving group, and in some high-achieving economies, the differences between memorizers and other students were substantial. In France and Japan, for example, pupils who combined self-monitoring and relational strategies outscored students using memorization by more than a year’s worth of schooling.
The U.S. actually had more memorizers than South Korea, long thought to be the paradigm of rote learning. Why? Because American schools routinely present mathematics procedurally, as sets of steps to memorize and apply. Many teachers, faced with long lists of content to cover to satisfy state and federal requirements, worry that students do not have enough time to explore math topics in depth. Others simply teach as they were taught. And few have the opportunity to stay current with what research shows about how kids learn math best: as an open, conceptual, inquiry-based subject.
To help change that, we launched a new center at Stanford University in 2014, called Youcubed. Our central mission is to communicate evidence-based practices to teachers, other education professionals, parents and students. To that end, we have devised recommendations that take into consideration how our brains grapple with abstract mathematical concepts. We offer engaging lessons and tasks, along with a wide range of advice, including the importance of encouraging what is known as a growth mindset—offering messages such as “mistakes grow your brain” and “I believe you can learn anything.”
The foundation all math students need is number sense—essentially a feel for numbers, with the agility to use them flexibly and creatively (watch a video explaining number sense here: https://www.youcubed.org/what-is-number-sense/). A child with number sense might tackle 19 × 9 by first working with “friendlier numbers”—say, 20 × 9—and then subtracting 9. Students without number sense could arrive at the answer only by using an algorithm. To build number sense, students need the opportunity to approach numbers in different ways, to see and use numbers visually, and to play around with different strategies for combining them. Unfortunately, most elementary classrooms ask students to memorize times tables and other number facts, often under time pressure, which research shows can seed math anxiety. It can actually hinder the development of number sense.
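The “friendlier numbers” strategy in the 19 × 9 example can be made explicit: round one factor to a nearby round number, multiply, then compensate for the overshoot.

```python
# The "friendlier numbers" strategy from the 19 x 9 example, made
# explicit: round one factor, multiply, then subtract the overshoot.
def friendly_multiply(a, b, friendly_a):
    """Compute a * b via friendly_a * b, compensating for the overshoot."""
    return friendly_a * b - (friendly_a - a) * b

# 19 x 9 as "twenty nines, take one nine away":
assert friendly_multiply(19, 9, 20) == 20 * 9 - 9 == 171
assert friendly_multiply(19, 9, 20) == 19 * 9
```

The point of number sense is precisely that a child chooses the decomposition flexibly, rather than running one memorized algorithm.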
In 2005 psychologist Margarete Delazer of Medical University of Innsbruck in Austria and her colleagues took functional MRI scans of students learning math facts in two ways: some were encouraged to memorize and others to work those facts out, considering various strategies. The scans revealed that these two approaches involved completely different brain pathways. The study also found that the subjects who did not memorize learned their math facts more securely and were more adept at applying them. Memorizing some mathematics is useful, but the researchers’ conclusions were clear: an automatic command of times tables or other facts should be reached through “understanding of the underlying numerical relations.”
Additional evidence tells us that students gain a deeper understanding of math when they approach it visually—for instance, seeing multiplication facts as rectangular arrays or quadratic functions as growing patterns. When we think about or use symbols and numbers, we use different brain pathways than when we visualize or estimate with numbers. In a 2012 imaging study, psychologist Joonkoo Park, now at the University of Massachusetts Amherst, and his colleagues demonstrated that people who were particularly adept at subtraction—considered conceptually more difficult than addition—tapped more than one brain pathway to solve problems. And a year later Park and psychologist Elizabeth Brannon, both then at Duke University, found that students could boost their math proficiency through training that engaged the approximate number system, a cognitive system that helps us estimate quantities.
Brain research has elucidated another practice that keeps many children from succeeding in math. Most mathematics classrooms in the U.S. equate skill with speed, valuing fast recall and testing even the youngest children against the clock. But studies show that kids manipulate math facts in their working memory—an area of the brain that can go off-line when they experience stress. Timed tests impair working memory in students of all backgrounds and achievement levels, and they contribute to math anxiety, especially among girls. By some estimates, as many as a third of all students, starting as young as age five, suffer from math anxiety.
The irony of the emphasis on speed is that some of the world’s leading mathematicians are not fast at math. Laurent Schwartz, who won math’s highest award, the Fields Medal, in 1950, wrote in his autobiography that he was a slow thinker in math who believed he was “stupid” until he realized that “what is important is to deeply understand things and their relations to each other. This is where intelligence lies. The fact of being quick or slow isn’t really relevant.”
A number of leading mathematicians, such as Conrad Wolfram and Steven Strogatz, have argued strongly that math is misrepresented in most classrooms. Too many slow, deep math thinkers are turned away from the subject early on by timed tests and procedural teaching. But if American classrooms begin to present the subject as one of open, visual, creative inquiry, accompanied by growth-mindset messages, more students will engage with math’s real beauty. PISA scores would rise, and, more important, our society could better tap the unlimited mathematical potential of our children.
This article was originally published with the title “Why Math Education in the U.S. Doesn’t Add Up”
By Shannon Hall | 09/01/16
A puzzling mismatch is forcing astronomers to re-think how well they understand the expansion of the universe.
Astronomers think the universe might be expanding faster than expected.
If true, it could reveal an extra wrinkle in our understanding of the universe, says Nobel Laureate Adam Riess of the Space Telescope Science Institute and Johns Hopkins University. That wrinkle might point toward new particles or suggest that the strength of dark energy, the mysterious force accelerating the expansion of the universe, actually changes over time.
The result appears in a study published in The Astrophysical Journal this July, in which Riess’s team measured the current expansion rate of the universe, also known as the Hubble constant, better than ever before.
In theory, determining this expansion rate is relatively simple, as long as you know the distance to a galaxy and the rate at which it is moving away from us. But distance measurements are tricky in practice and require using objects of known brightness, so-called standard candles, to gauge how far away a galaxy is.
The use of Type Ia supernovae—exploding stars that shine with the same intrinsic luminosity—as standard candles led to the discovery that the universe’s expansion is accelerating and earned Riess, as well as Saul Perlmutter and Brian Schmidt, a Nobel Prize in 2011.
The latest measurement builds on that work and indicates that the universe is expanding by 73.2 kilometers per second per megaparsec (a unit that equals 3.3 million light-years). Think of dividing the universe into grid cells, each a megaparsec across. Galaxies in each successive cell recede 73.2 kilometers per second faster than those in the cell before.
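The arithmetic behind that picture is a single linear relation, Hubble’s law, which can be sketched in a few lines (the 73.2 value is from the study; the example distance is arbitrary):

```python
# Hubble's law: recession velocity grows linearly with distance, v = H0 * d.
H0 = 73.2  # km/s per megaparsec, the value reported by Riess's team

def recession_velocity(distance_mpc):
    """Recession velocity (km/s) of a galaxy distance_mpc megaparsecs away."""
    return H0 * distance_mpc

# A galaxy 10 megaparsecs away recedes at about 732 km/s.
print(recession_velocity(10))
```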
Although the analysis pegs the Hubble constant to within experimental errors of just 2.4 percent, the latest result doesn’t match the expansion rate predicted from the universe’s trajectory. Here, astronomers measure the expansion rate from the radiation released 380,000 years after the Big Bang and then run that expansion forward in order to calculate what today’s expansion rate should be.
It’s similar to throwing a ball in the air, Riess says. If you understand the state of the ball (how fast it’s traveling and where it is) and the physics (gravity and drag), then you should be able to precisely predict how fast that ball is traveling later on.
“So in this case, instead of a ball, it’s the whole universe, and we think we should be able to predict how fast it’s expanding today,” Riess says. “But the caveat, I would say, is that most of the universe is in a dark form that we don’t understand.”
The rates predicted from measurements made on the early universe with the Planck satellite are 9 percent smaller than the rates measured by Riess’s team—a puzzling mismatch that suggests the universe could be expanding faster than physicists think it should.
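A back-of-the-envelope check of that 9 percent figure (a minimal sketch: the 73.2 comes from the article, while the 66.9 km/s/Mpc Planck-based value is an assumption, since the article does not quote it directly):

```python
# How much smaller is the early-universe prediction than the direct
# measurement? The article says about 9 percent.
local_h0 = 73.2  # km/s/Mpc, measured by Riess's team (from the article)
early_h0 = 66.9  # km/s/Mpc, assumed stand-in for the Planck-based prediction

shortfall = (local_h0 - early_h0) / local_h0 * 100
print(f"prediction is {shortfall:.1f} percent smaller")  # close to 9 percent
```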
David Kaplan, a theorist at Johns Hopkins University who was not involved with the study, is intrigued by the discrepancy because it could be easily explained with the addition of a new theory, or even a slight tweak to a current theory.
“Sometimes there’s a weird discrepancy or signal and you think ‘holy cow, how am I ever going to explain that?’” Kaplan says. “You try to come up with some cockamamie theory. This, on the other hand, is something that lives in a regime where it’s really easy to explain it with new degrees of freedom.”
Kaplan’s favorite explanation is that there’s an undiscovered particle, which would affect the expansion rate in the early universe. “If there are super light particles that haven’t been taken into account yet and they make up some smallish fraction of the universe, it seems that can explain the discrepancy relatively comfortably,” he says.
But others disagree. “We understand so little about dark energy that it’s tempting to point to something there,” says David Spergel, an astronomer from Princeton University who was also not involved in the study. One explanation is that dark energy, the cause of the universe’s accelerating expansion, is growing stronger with time.
“The idea is that if dark energy is constant, clusters of galaxies are moving apart from each other but the clusters of galaxies themselves will remain forever bound,” says Alex Filippenko, an astronomer at the University of California, Berkeley and a co-author on Riess’s paper. But if dark energy is growing in strength over time, then one day—far in the future—even clusters of galaxies will get ripped apart. And the trend doesn’t stop there, he says. Galaxies, clusters of stars, stars, planetary systems, planets, and then even atoms will be torn to shreds one by one.
The implications could—literally—be Earth-shattering. But it’s also possible that one of the two measurements is wrong, so both teams are currently working toward even more precise measurements. The latest discrepancy is also relatively minor compared to past disagreements.
“I’m old enough to remember when I was first a student and went to conferences and people argued over whether the Hubble constant was 50 or 100,” says Spergel. “We’re now in a situation where the low camp is arguing for 67 and the high camp is arguing for 73. So we’ve made progress! And that’s not to belittle this discrepancy. I think it’s really interesting. It could be the signature of new physics.”
Neutrinos are tricky. Trillions of these harmless, neutral particles pass through us every second, yet they interact so rarely with matter that scientists must aim a beam of them at giant detectors to study them. And to collect enough interactions, that beam has to start out as concentrated as possible.
To concentrate the beam, an experiment needs a special device called a neutrino horn.
An experiment’s neutrino beam is born from a shower of short-lived particles, created when protons traveling close to the speed of light slam into a target. But that shower doesn’t form a tidy beam itself: That’s where the neutrino horn comes in.
Once the accelerated protons smash into the target to create pions and kaons — the short-lived charged particles that decay into neutrinos — the horn has to catch and focus them using a magnetic field. The pions and kaons must be focused immediately, before they decay: unlike those charged particles, neutrinos are electrically neutral and don’t respond to magnetic fields, so they can’t be focused directly.
Without the horn, an experiment would lose 95 percent of the neutrinos in its beam. Scientists need to maximize the number of neutrinos in the beam because neutrinos interact so rarely with matter. The more you have, the more opportunities you have to study them.
“You have to have tremendous numbers of neutrinos,” said Jim Hylen, a beam physicist at Fermilab. “You’re always fighting for more and more.”
Also known as magnetic horns, neutrino horns were invented at CERN by the Nobel Prize-winning physicist Simon van der Meer in 1961. Several labs used them over the following years, but today Fermilab and J-PARC in Japan are the only major laboratories hosting experiments with neutrino horns, and Fermilab is one of the few places in the world that builds them.
“Of the major labs, we currently have the most expertise in horn construction here at Fermilab,” Hylen said.
How they work
The proton beam first strikes the target that sits inside or just upstream of the horn. The powerful proton beam would punch through the aluminum horn if it hit it, but the target, which is made of graphite or beryllium segments, is built to withstand the beam’s full power. When the target is struck by the beam, its temperature jumps by more than 700 degrees Fahrenheit, making the process of keeping the target-horn system cool a challenge involving a water-cooling system and a wind stream.
Once the beam hits the target, the neutrino horn directs resulting particles that come out at wide angles back toward the detector. To do this, it uses magnetic fields, which are created by pulsing a powerful electrical current — about 200,000 amps — along the horn’s surfaces.
“It’s essentially a big magnet that acts as a lens for the particles,” said physicist Bob Zwaska.
The horns come in slightly different shapes, but they generally look on the outside like a metal cylinder sprouting a complicated network of pipes and other supporting equipment. On the inside, an inner conductor leaves a hollow tunnel for the beam to travel through.
Because the current flows in one direction on the inner conductor and the opposite direction on the outer conductor, a magnetic field forms between them. A particle traveling along the center of the beamline will zip through that tunnel, escaping the magnetic field between the conductors and staying true to its course. Any errant particles that angle off into the field between the conductors are kicked back in toward the center.
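The strength of that field between the conductors can be sketched with Ampère’s law for a coaxial geometry, where the field falls off as 1/r. The 200,000-amp pulse is the figure quoted earlier in the article; the 10-centimeter radius is purely illustrative, not a real horn dimension.

```python
import math

# Toroidal field between coaxial conductors carrying opposite currents:
#   B = mu0 * I / (2 * pi * r)
MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A
I = 200_000               # amps, the pulsed current quoted in the article

def field_tesla(r_m):
    """Magnetic field (tesla) at radius r_m meters between the conductors."""
    return MU0 * I / (2 * math.pi * r_m)

print(field_tesla(0.10))  # about 0.4 tesla at an illustrative 10 cm radius
```

Note how the 1/r falloff means errant particles farther from the axis feel a weaker kick, which is part of why the horn’s exact shape matters.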
The horn’s current flows in a way that funnels the positively charged particles that decay into neutrinos toward the beamline and deflects the negatively charged particles that decay into antineutrinos outward. Reversing the current swaps the selection, creating an antimatter beam. Experiments can run either beam and compare the data from the two runs; by studying neutrinos and antineutrinos, scientists try to determine whether neutrinos are responsible for the matter-antimatter asymmetry in the universe. Experiments can also control which range of neutrino energies they target by tuning the strength of the field or the shape and location of the horn.
Making and running a neutrino horn can be tricky. A horn has to be engineered carefully to keep the current flowing evenly. And the inner conductor has to be as slim as possible to avoid blocking particles. But despite its delicacy, a horn has to handle extreme heat and pressure from the current that threaten to tear it apart.
“It’s like hitting it with a hammer 10 million times a year,” Hylen said.
Because of the various pressures acting on the horn, its design requires extreme attention to detail, down to the specific shape of the washers used. And as Fermilab is entering a precision era of neutrino experiments running at higher beam powers, the need for the horn engineering to be exact has only grown.
“They are structural and electrical at the same time,” Zwaska said. “We go through a huge amount of effort to ensure they are made extremely precisely.”
To help his readers fathom evolution, Charles Darwin asked them to consider their own hands.
“What can be more curious,” he asked, “than that the hand of a man, formed for grasping, that of a mole for digging, the leg of the horse, the paddle of the porpoise, and the wing of the bat, should all be constructed on the same pattern, and should include similar bones, in the same relative positions?”
Darwin had a straightforward explanation: People, moles, horses, porpoises and bats all shared a common ancestor that grew limbs with digits. Its descendants evolved different kinds of limbs adapted for different tasks. But they never lost the anatomical similarities that revealed their kinship.
As a Victorian naturalist, Darwin was limited in the similarities he could find. The most sophisticated equipment he could use for the task was a crude microscope. Today, scientists are carrying on his work with new biological tools. They are uncovering deep similarities that have been overlooked until now.
On Wednesday, a team of researchers at the University of Chicago reported that our hands share a deep evolutionary connection not only to bat wings or horse hooves, but also to fish fins.
The unexpected discovery will help researchers understand how our own ancestors left the water, transforming fins into limbs that they could use to move around on land.
To the naked eye, there is not much similarity between a human hand and the fin of, say, a goldfish. A human hand is at the end of an arm. It has bones that develop from cartilage and contain blood vessels. This type of tissue is called endochondral bone.
A goldfish grows just a tiny cluster of endochondral bones at the base of its fin. The rest of the fin is taken up by thin rays, which are made of an entirely different tissue called dermal bone. Dermal bone does not start out as cartilage and does not contain blood vessels.
These differences have long puzzled scientists. The fossil record shows that we share a common aquatic ancestor with ray-finned fish that lived some 430 million years ago. Four-limbed creatures with spines — known as tetrapods — had evolved by 360 million years ago and went on to colonize dry land.
Read more at http://mobile.nytimes.com/2016/08/18/science/from-fins-into-hands-scientists-discover-a-deep-evolutionary-link.html
By HENRY FOUNTAIN JUNE 28, 2016
Helium is an important gas for science and medicine. Among other things, in liquid form (a few degrees above absolute zero) it is used to keep superconducting electromagnets cold in equipment like M.R.I. machines and the Large Hadron Collider at CERN, the European Organization for Nuclear Research, which uses 265,000 pounds of it to help keep particles in line as they zip around.
Helium’s role in superconductivity and other applications has grown so much that there have been occasional shortages. The gas forms in nature through radioactive decay of uranium and thorium, but exceedingly slowly; in practical terms, all the helium we will ever have already exists. And because it does not react with anything and is light, it can easily escape to the atmosphere.
Until now, it has been discovered only as a byproduct of oil and gas exploration, as the natural gas in some reservoirs contains a small but commercially valuable proportion of helium. (The first detection of helium in a gas field occurred in the early 1900s when scientists analyzed natural gas from a well in Dexter, Kan., that had a peculiar property: It would not burn.)
But now scientists have figured out a way to explore specifically for helium. Using their techniques, they say, they have found a significant reserve of the gas in Tanzania that could help ease concerns about supplies.
“We’re essentially replicating the strategy for exploring for oil and gas for helium,” said Jonathan Gluyas, a professor of geoenergy at Durham University in England. One of his graduate students, Diveena Danabalan, presented research on the subject on Tuesday in Yokohama, Japan, at the Goldschmidt Conference, a gathering of geochemists.
One key to developing the technique, Dr. Gluyas said, is understanding how helium is released from the rock in which it forms. Ordinarily, a helium atom stays within the rock’s crystal lattice. “You need a heating event to kick it out,” he said. Volcanoes or other regions of magma in the earth can be enough to release the gas, he said.
Once released, the helium has to be trapped by underground formations — generally the same kind of formations that can trap natural gas, and that can be found using the same kind of seismic studies that are undertaken for oil and gas exploration. The helium, which is mixed with other gases, can be recovered the same way natural gas is: by drilling a well.
Working with scientists from the University of Oxford and a small Norwegian start-up company called Helium One, the researchers prospected in a part of Tanzania where studies from the 1960s suggested helium might be seeping from the ground. The area is within the East African Rift, a region where one of Earth’s tectonic plates is splitting. The rifting has created many volcanoes.
Dr. Gluyas said the gas discovered in Tanzania may be as much as 10 percent helium, a huge proportion compared with most other sources. The researchers say the reservoir might contain as much as 54 billion cubic feet of the gas, or more than twice the amount currently in the Federal Helium Reserve, near Amarillo, Tex., which supplies about 40 percent of the helium used in the United States and is being drawn down.
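A rough sanity check of that comparison (the 54 billion cubic feet is from the article; the Federal Helium Reserve figure of about 24 billion cubic feet is an assumption chosen to be consistent with “more than twice,” not a number the article states):

```python
# Comparing the possible Tanzanian reservoir with the Federal Helium Reserve.
tanzania_cf = 54e9  # cubic feet, upper estimate from the article
reserve_cf = 24e9   # cubic feet, assumed reserve size (not from the article)

ratio = tanzania_cf / reserve_cf
print(f"{ratio:.2f}x the reserve")  # more than twice, as the article says
```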
The next step would be for Helium One or one of the major helium suppliers around the world to exploit the find. But for Dr. Gluyas, the research opens up the possibility of finding the gas in new places.
“We’re in the position where we could map the whole world and say these are the sorts of areas where you’d find high helium,” he said.