Category Archives: Science

Marswhelmed

So, the Mars Science Laboratory “Curiosity” has discovered evidence that, about three billion years ago, the environment on the planet Mars could have supported Earth-like microbial life. Some news outlets (including the MSL Twitter feed) are billing this discovery as the accomplishment of Curiosity’s mission.

I have a confession to make.

I don’t really find this discovery all that exciting.

The MSL team’s discovery is a confirmation of a long-expected hypothesis. (Indeed, with the number of planetary environments out there, it would be statistically silly to think that Earth is the only life-supporting place!) It’s valuable to know, and it’s important to the scientific method to rack up such confirmations even when we’re as sure as we can be, but it doesn’t exactly have the same allure as striking out into the unknown. I think the spirit of exploration is important to maintain in our space programs, because brand-new missions and discoveries are what keep space exploration in the public eye. After all, a recent study shows that most Americans consider exploring Mars a national priority, most want to see a human mission to Mars, and three-quarters want to see the NASA budget doubled. I am confident that the dramatic landing of the Curiosity rover, with its brand-new mission architecture, has something to do with that enthusiasm.

There’s also something I find slightly foreboding about Curiosity’s confirmation. In 2011, the National Research Council’s Planetary Science Decadal Survey listed and prioritized the objectives of our Solar System exploration program for 2013 through 2022. This is a study done every ten years to identify which flagship-sized missions NASA should fund, design, and launch in the coming decade. First on the list for 2013-2022: a mission to return samples of Martian rock and soil to Earth. The announced “Mars 2020” rover is in line with that objective.

I’m going to go out on a limb and predict the conclusion sentence of scientific findings from a Mars sample return mission:

Chemicals and minerals present on the surface of Mars indicate that ancient Mars may have included wet environments able to support Earth-like microbial life.

In other words, I don’t think a Mars sample return mission will give us any dramatically new information that we didn’t already have from MSL, MER, MRO, or any of the Martian samples we already have. See what’s got me worried? I don’t think we’re going to actually discover life – in fact, I would be very surprised if the 2020 rover included any instruments actually capable of recognizing a Martian if it walked right up, poked the rover with a Martian stick, and walked away. (Curiosity doesn’t!) I am afraid that we will put this rover on the Red Planet in 2020, cache a sample, retrieve the sample in 2030, and the public response will be, “wait a minute, we spent two decades confirming what we already knew in 2013? Come on, space program…where’s my jetpack?”

A Mars sample return mission would be a triumph…for the niche sub-field of Martian geochemistry. I don’t think it would have the sort of broad scientific and public impact that we should expect from a flagship-scale mission. Basic research science plods along, making incremental improvements in understanding and slow-but-steady progress. NASA should be sticking its neck out, thinking big, and going for the most challenging – and rewarding – missions. Instead of looking for environments that might have been habitable three billion years ago, we should be looking for actual life.

You see, even before MSL’s discovery, we already knew of the existence of a watery, potentially life-supporting environment. Jupiter’s moon Europa has an icy crust with a subsurface water ocean beneath. The ocean is warm enough to be liquid, because of the energy input from Jupiter’s tides. And scientists have found that that ocean contains lots of salts and minerals – and even organic (carbon-containing) compounds. Liquid water, energy sources, and chemical building blocks: everything an Earth-like life form needs! The main difference between Europa and Mars is that, while we’ve been able to observe the desolation of the Martian surface for decades and know that we could only expect to find evidence of ancient microbes, we have no idea what’s under the Europan ice sheet. It could be nothing…but it could also be life as rich and complex as what we find, on Earth, under Antarctic ice, in sealed cave systems, or around hydrothermal vents. Unlike Mars, where we have been forming preliminary conclusions for years, we won’t know until we get something under that ice layer. That’s the kind of exciting exploration work that I want to see from my NASA flagship missions.

The Decadal Survey did recognize the potential for alien life on Europa. Its executive summary says that “the second highest priority Flagship mission for the decade 2013-2022 is the Jupiter Europa Orbiter” but notes that “both a decrease in mission scope and an increase in NASA’s planetary budget are necessary” to fly a mission to Europa. Personally, I’d prefer to discover alien creatures within my lifetime…but I don’t make policy or control the purse-strings. So, instead, off to Mars we’ll go again.

Original Fiction: “Conference” (final draft)

I have been trying to sell this story for a while now, but have not been successful. There’s a bit of a catch-22 to selling a short story for the first time: without any feedback from editors and readers, there is no way for me to tell whether a rejection came because the story didn’t align with a publication’s interests at the time, or because the editors didn’t think the story was very good. (And if it wasn’t very good…what it did wrong.)

This makes me sad, because I got lots of positive feedback from people who went to graduate school in a technical field. I think that maybe that’s the problem: the story appeals to too much of a niche crowd.

Anyway, here it is, the version of the story I most recently tried to sell. It’s about a young scientist presenting her findings at a research conference, and the unexpected reception she encounters there. It was inspired by some of my own experiences in grad school.

Conference

The numbers didn’t match up. Ceren Aydomi tapped her desk, frowning at the resonance spectra before her. The projections cast pale purple and green light over Ceren’s face, spilling down the front of her body and glinting from the polished glass surface of her desk. The peaks of each spectrum marched onward, rapidly deviating from her calculations. And the Three Hundred Seventy-Eighth Channel Interstice Studies Meeting was only two days away.

What’s the deal with Lagrange Points?

You may have read about rumors that NASA is considering building a space station at a place called the “Earth-Moon L2 Point.” The “L” is short for “Lagrange,” and this is one of the places in space known as a “Lagrange Point.” Unless you’re familiar with the basics of orbit mechanics, you may be wondering – who the heck is Lagrange, and why does he have points in space? More to the point (ha!), why is NASA interested in building a space station there?

To explain what a Lagrange Point is, I’m going to take you through a couple analogies.

Imagine you are standing on the top of a perfectly rounded, symmetric hill.

Right where you are, the ground is flat and level. You don’t feel any forces moving you one way or the other: you are in equilibrium. But if you take a step in any direction, the ground begins to slope and a force pulls you further away from the top of the hill. The magnitude of this force is your weight, times a factor that accounts for the angle of slope: F = mg sin(θ).

The force always pulls you out from the center of the hill. Let’s call this direction r, for “radial.” There’s another direction on the hill, the “circumferential” direction c – this is a direction that always takes you walking around the hill in a circle. There is no component of force pulling you in this direction.

From physics classes, you are probably familiar with the idea of potential energy. Potential energy is a quality associated with points in space, and we express that quality with a single number measured in joules. Where I am sitting, space might have a potential energy of six joules, and where you are standing space might have a potential energy of ten joules. This energy comes from sources like gravity or magnetism. The difference in potential energy between two points tells us how much work it takes to move something from one point to the other: if I want to visit you, I need to spend four joules of energy. If you visit me, you actually get four joules out of the deal, which you can spend on something else (such as moving faster).

Potential energy has a direct connection to force. If you are in a place which has a high potential energy, and nearby is a place with low potential energy, you will feel a force pushing you towards the lower-energy spot. Mathematically, we say that the force is equal to the negative gradient of the potential energy. So, on this hill, the top of the hill has the most potential energy (we’ll call it zero, though) and there is less and less energy as we move off in the +r direction. In the c direction, the potential energy stays the same, whatever value your current position in the r direction gives it. If you go up the hill, in the direction –r, you will go towards higher potential energy and the force of your weight will work against you. You could imagine making a topographic map of the hill, only instead of the contours representing different heights, they represent different potential energy levels.
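The force-from-potential idea can be checked with a few lines of Python. This is a minimal sketch (the mass, slope angle, and function names are hypothetical choices of mine, not from the post): define the hill’s potential energy, then recover the force as the negative of a numerical gradient.

```python
import math

m = 70.0                   # hypothetical mass of the hiker, kg
g = 9.81                   # gravitational acceleration, m/s^2
theta = math.radians(30)   # hypothetical slope angle of the hill

def U(r):
    """Potential energy a distance r out from the hilltop (zero at the top)."""
    return -m * g * math.sin(theta) * r

# Force is the negative gradient of U, here taken numerically:
dr = 1e-6
r = 5.0
F = -(U(r + dr) - U(r - dr)) / (2 * dr)

print(F)                        # ~343.35 N, pointing outward in +r
print(m * g * math.sin(theta))  # matches the weight-times-slope-factor formula
```

Because U is linear in r here, the numerical gradient reproduces mg·sin(θ) essentially exactly; on a real hill with varying slope the same recipe would give the local force at each r.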

If you were to let yourself go and slide down the hill, your total energy would stay about constant. You may recall the definition of kinetic energy: mv²/2, where m is your mass and v is your speed. The sum of potential and kinetic energy must stay the same, so as you slide and your potential energy drops, your kinetic energy (and therefore your speed) will rise. The equation for your total energy has pieces from both: E = U + K = –mgr sin(θ) + mv²/2. Notice that in this equation we have one term that depends on our position in space and one term that depends on our speed.
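That energy bookkeeping turns into a tiny calculation. Here’s a sketch (slope angle and numbers are made up): holding E = U + K constant from the starting point gives your speed after sliding a distance r, and the mass conveniently cancels out.

```python
import math

g = 9.81                   # m/s^2
theta = math.radians(30)   # hypothetical slope angle

def speed_after_slide(r, v0=0.0):
    """Speed after sliding a distance r down the slope, starting at speed v0.
    From -m g sin(theta) r + m v^2 / 2 = m v0^2 / 2 (energy conservation);
    the mass m cancels."""
    return math.sqrt(v0**2 + 2 * g * math.sin(theta) * r)

print(speed_after_slide(10.0))  # ~9.9 m/s after sliding 10 m from rest
```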

Got the hill down? Great. Now I’m going to stick you someplace else!

Suppose we go and find a merry-go-round, and we convince the operator to clear out all the horses and chariots and stuff so that it’s just a big, flat, rotating disk. Let’s also put a curtain around the outer edge, so that we can’t see outside from within. Then, you go and stand in the very center and we slowly start rotating the disk until it reaches a constant, slow angular velocity.

In the exact middle of the disk, you will notice very little. However, take one step away – in the +r direction – and you will begin to feel a centrifugal force: a force that you perceive as pulling you outwards. But thinking about forces in rotating reference frames is hard, and we’ll immediately get into pedantic debates about which forces exist and which don’t. So let’s think about energy again!

The disk is flat, so every point on it will have the same gravitational potential energy – it doesn’t really matter what that energy value actually is, since only differences in potential energy matter, so let’s call it all zero. As you walk in the +r direction, you will have kinetic energy from two sources. First, there is your own motion, which contributes energy mv²/2. Second, there is the energy you have from spinning in a circle, because your feet are on the disk: your feet move at speed rw, so this contributes mr²w²/2. If the spin rate w of the disk is slow enough, you might not notice it, but it’s there.

Your total energy, therefore, is E = U + K = mr²w²/2 + mv²/2. Now–

Huh. Wait a second. That equation has a term that depends on your position in space and a term that depends on your speed. The piece that depends on speed is exactly what we had on the hill, too!

The piece that depends on position doesn’t quite look the same as it did on the hill. However, it has one very similar property: when you move out in +r, the value of that term changes. And, therefore, the magnitude of your speed v must change to keep the total energy constant! We can debate about whether centrifugal or centripetal forces are real, but the equation for your total energy behaves in the same kind of way on the spinning disk as it does on a hill. Effectively, your entire kinetic energy trades back and forth between the “translational” mv²/2 part and the “rotational” mr²w²/2 part, just as on the hill it traded between K and U.

So let’s call mr²w²/2 your “effective potential energy” on the disk! It behaves just like any other kind of potential energy – gravitational, magnetic, chemical, whatever – would, because it is energy that depends only on your position in space, even though it’s actually kinetic energy. We could even make a contour map of the effective potential energy.
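Following this simplified bookkeeping (all numbers below are made up), a few lines of Python show the trade: hold the total energy fixed, and walking outward on the disk forces your translational speed to drop as the rotational term grows.

```python
import math

# Hypothetical numbers: a 70 kg person on a disk spinning at 0.5 rad/s.
m, w = 70.0, 0.5

def U_eff(r):
    """'Effective potential' on the disk: the rotational kinetic energy
    m r^2 w^2 / 2, which depends only on your position r."""
    return 0.5 * m * r**2 * w**2

# Total energy E = U_eff + m v^2 / 2 stays constant as you walk outward.
E = U_eff(1.0) + 0.5 * m * 2.0**2      # start at r = 1 m, walking at 2 m/s
v_at_2m = math.sqrt(2 * (E - U_eff(2.0)) / m)
print(v_at_2m)  # ~1.80 m/s: slower, because the rotational term has grown
```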

Okay, then. Lagrange points, right?

Imagine the Earth and the Moon, sitting in space near each other. Don’t worry about orbital motions yet – just pretend that the two are fixed. Each body has a gravitational field, which we can visualize by a contour map of potential energy levels: far away from both the Earth and Moon, an object would fall generally inwards toward them, with potential energy decreasing as it goes in. The closer an object gets to either body, the stronger the gravitational pull, so the contour lines must be spaced closer together. And somewhere in the middle, the gravitational force of the Earth and Moon will balance each other exactly, so there is a level spot in the potential energy map.

This isn’t the whole story about bodies in space, though, because the Earth and the Moon aren’t fixed. They orbit around each other. An object we place near the Earth and Moon will also orbit around them. And because of that orbital motion, the energy of the object must include a component from rotation – which we can incorporate into the effective potential energy map around the Earth and Moon. Picture sitting in a spaceship somewhere “above” the Earth-Moon system that rotates at the same rate as the Moon orbits the Earth, such that from your perspective the Earth and Moon appear fixed in space. Then the effective potential energy map must have a component accounting for that rotation, just like on the merry-go-round. The map will look something like this:
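The quantity behind such a map can be sketched in a few lines of Python. This computes the standard rotating-frame effective potential of the Earth-Moon system per unit mass, in nondimensional units (Earth-Moon distance, total mass, and rotation rate all set to 1; mu is the Moon’s mass fraction), sampling it along the Earth-Moon line.

```python
import math

mu = 0.01215  # Moon's fraction of the total Earth-Moon mass

def U_eff(x, y):
    """Effective potential per unit mass in the rotating frame:
    two gravity wells plus the merry-go-round rotation term."""
    r1 = math.hypot(x + mu, y)        # distance to Earth, at (-mu, 0)
    r2 = math.hypot(x - (1 - mu), y)  # distance to Moon, at (1 - mu, 0)
    return -(1 - mu) / r1 - mu / r2 - 0.5 * (x**2 + y**2)

# Sample along the Earth-Moon line: the potential dives near each body and
# is locally flat at the balance points between and beyond them.
for x in (-1.2, -0.5, 0.5, 0.8369, 1.1557):
    print(f"x = {x:+.4f}: U_eff = {U_eff(x, 0.0):.4f}")
```

Feeding a grid of (x, y) points into U_eff and drawing contours reproduces the familiar map with its five flat spots; the two x values 0.8369 and 1.1557 are approximately where L1 and L2 sit for this mu.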

Notice that there are five places on the map where the “topography” is locally flat – meaning that there is no net force acting on an object there. Between the Moon’s gravity, the Earth’s gravity, and the object’s own orbital rotation, objects in those locations are in equilibrium!

These are the Lagrange points, and this is what makes them special: place a satellite at a Lagrange point, and it will stay there.
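That “no net force” condition can even be used to find two of the points numerically. This sketch (nondimensional units again, with a hand-rolled bisection root-finder) locates L1 and L2 as the spots along the Earth-Moon axis where the slope of the effective potential vanishes.

```python
mu = 0.01215  # Moon's mass fraction of the Earth-Moon system

def dUdx(x):
    """Slope of the rotating-frame effective potential along the
    Earth-Moon axis (nondimensional units, y = 0)."""
    r1, r2 = x + mu, x - (1 - mu)
    return (1 - mu) * r1 / abs(r1)**3 + mu * r2 / abs(r2)**3 - x

def bisect(f, a, b, tol=1e-10):
    """Simple bisection root-finder; f must change sign on [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if (f(m) > 0) == (fa > 0):
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

L1 = bisect(dUdx, 0.5, 0.98)   # between the Earth and the Moon
L2 = bisect(dUdx, 1.01, 1.5)   # just beyond the Moon
print(L1, L2)  # ~0.837 and ~1.156 Earth-Moon distances from the barycenter
```

These match the well-known Earth-Moon values: L1 sits about 15% of the way back from the Moon toward Earth, and L2 about 17% beyond the Moon.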

These points are attractive places to put a space station because it’s easier to get to Lagrange points from the Earth’s surface than it is to go all the way to the Moon – and vice-versa. In terms of our effective potential energy map, we have to cross fewer contour lines to get from the Earth to, say, L2 than we do to get to the Lunar surface. Every time we want to cross a contour line, we have to make our spaceship gain or lose kinetic energy, and that means firing the engine – so crossing fewer contour lines directly corresponds to using less propellant or power.

If NASA located a space station at L2, then it could launch crews to the station with a smaller rocket than it would need to put the same crew on the Moon. NASA could also launch exploration vehicles and extra fuel to the station, so that the crew could eventually shuttle from the Earth to the station, and then take the station-to-Moon express from that point, at their leisure.

So: The reason why a station at L2 is exciting is not that L2 is an especially exciting place, but that the station would be part of a larger space exploration architecture. Not just flags and footprints, but more stations and vehicles and astronauts!


Quantitative Revolution

We’re going through an interesting sort of revolution in America. One after another, various disciplines are realizing (or, it’s coming out publicly that they have realized) that math is useful for stuff.

Wherever there is data available, a scientific, quantitative approach allows people to do two things. First, they can use existing data to develop a model which fits all the available observations. Next, they can in turn use the model to predict future behavior. And if people can make predictions, they can try to make decisions. Influence outcomes. Optimize certain results.

An obvious place for such an approach is the world of high finance, a discipline which is totally steeped in numbers and data – and completely focused on the very quantitative problem of maximizing a return and minimizing loss – but for a long time apparently ignored statistical modeling. People successfully applied statistical analysis, and ended up doing very well…but there was a backlash. Here’s an interview where a reporter complains that trying to optimize stock market gains somehow mis-values the stock market, at least according to his conception of value.

Geez. Those…those…physicists. They use models based on data of past performance, then try and predict future performance…and worst of all, they keep getting their predictions right!

(I want to note that if someone has a problem with the idea that these “quants” have privatized tremendous gains and socialized tremendous losses, that’s not a problem with their approach. It’s an issue with the goals of their models, and whether those goals are morally justified is a separate question from whether the approach works to satisfy the goals.)

We also have a ton of data available in the world of professional sports. Commentators make it their business to know – and inform viewers – whether or not this is the guy who gets on base with a ground-rule double on an overcast Tuesday more than any other player with an odd jersey number when the pitcher throws a 96-mile-an-hour fastball. In fact, this revolution I’m referring to might even be called the Moneyball effect. After all, that movie brought this idea forward in the popular consciousness.

Most recently – and certainly most dramatically – we have people who build statistical models on political poll data. Despite a constant media barrage insisting that the 2012 election was a dead-heat horse-race fifty-fifty hyphenated-adjective toss-up, these poll wonks stubbornly viewed their data scientifically, constructed careful algorithmic models, and predicted a much more certain, though far less entertaining, outcome. There was quite a backlash against these predictive models, at first, though the backlash seems to have been driven by either ideological preconceptions or a misunderstanding of the statistics: a poll showing two candidates with a 51-49% split doesn’t mean that the likelihood of each candidate winning is 51% or 49%. In true Hari Seldon-like fashion, the models aren’t predicting what single voters do or making decisions for us; but with an aggregate of people, they can make astonishingly good predictions. In many ways, this was the biggest story to come out of the 2012 American elections: scientific thinking and mathematical methods actually work!
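That point about a 51-49 poll is easy to illustrate with a toy simulation (all numbers here are hypothetical): model the poll’s sampling error as Gaussian and count how often the two-point leader actually comes out ahead.

```python
import random

random.seed(0)          # reproducible toy run
poll_lead = 0.02        # candidate A polls at 51%, B at 49%
sampling_sigma = 0.03   # assumed poll standard error, ~3 points

# Draw many hypothetical "true margins" consistent with the poll and its
# error, and count how often candidate A is actually ahead.
trials = 100_000
a_wins = sum(
    1 for _ in range(trials)
    if random.gauss(poll_lead, sampling_sigma) > 0
)
print(a_wins / trials)  # roughly 0.75 -- far from a 51% chance of winning
```

A two-point polling lead translates into roughly three-to-one odds under these assumptions, which is exactly the gap between a poll margin and a win probability that the aggregators’ critics missed.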

This notion seems revolutionary, in each field it has touched so far. That appearance is what I find most surprising! Science has given humanity an entire body of knowledge. We can predict the behavior of quantum particles. We can determine whether there are planets orbiting other stars. We can forecast snowfall to within a few inches of accuracy a week in advance. We can find out what the feathers on a dinosaur look like. We can reconstruct Pangaea in a computer. And all the predictive mathematical models that allow scientists to do those things also give us cell phones, Angry Birds, medications, contact lenses, and all sorts of other goodies. Science isn’t just something that happens in isolated labs – it gets out into the world. And quantitative thinking isn’t magical wizardry – it is a tool that anyone with the will to apply themselves can learn.

This is a lesson that I hope we take to heart.

The Most Important Issue

I’ve seen some political surveys recently that ask respondents to pick the most important issue to them from a predefined list, and I’ve never had any of these lists include what I think is the most important issue facing our country right now. This is probably because it’s hard to condense my issue into a pithy phrase. Generally, I would go for a choice such as “science and technology policy” or “research, innovation, and education,” but items like those almost never appear in the poll options.

We live in a fast-moving world, and I am concerned about the United States’ ability to keep up. Perennial stories crop up in the news of how US students’ test scores are falling in science and math, how high technology is moving to India and China, how other countries are committing increasing resources to clean energy, space stations, or Moon probes. Companies in the US are much more focused on next-quarter profits than they are on research and development. Congressmembers routinely attack the National Science Foundation and National Institutes of Health for wasting taxpayer money by spending it on basic research. In such a climate, I am worried about whether, in the next decade or two, the US will cede global leadership to other countries. The problem isn’t just money, but also the level of public awareness, understanding, and engagement with the work coming out of places like the NSF and NASA.

This is not just an idealistic policy issue – it’s also an education issue, economic issue, and national security issue. Do we want to create high-paying, rewarding jobs? We can do so by investing in high-tech infrastructure. Do we want American companies to innovate? We need to make sure they have incentives for longer-term R&D. Do we want our transportation systems to be safe from terrorist threats? Then we need intensive research on efficient and sensible ways to identify concealed weapons. Do we want true energy security for the long haul? Then we need to pursue technological solutions for renewable or clean energy sources. Do we want our military to remain effective and safe? Then we need to give our soldiers, sailors, and airmen the latest technologies. Do we want our children to be able to compete in the global marketplace when they grow up and start looking for work? We need to equip them with the best tools we can. And do we want our policymakers to make informed and well-considered decisions about all these issues? Then we need to make sure they are well-educated about science and technology, too!

I want candidates for office to advocate enhanced support for the NSF, NIH, Department of Energy, and NASA. I want them to stand for infrastructure investments. I want them to speak highly of science and engineering scholarship or fellowship programs. I want them to care about basic research. I want them to commit federal dollars to programs that clearly enhance our capabilities and quality of life, but corporations won’t pursue because of their myopic short-term goals. I want them to openly consult the smartest people they can find when considering these issues.

That’s what I think is the most important issue in America. Science and technology policy. Science and math education. High-tech infrastructure. Secure energy. The value of intelligence and critical thinking. In short: the future.

Stoking our Curiosity for Other Worlds

In the wee hours of last Monday (Eastern US time), a jubilant Mission Control erupted at the successful landing of the Mars Science Laboratory “Curiosity.”

Curiosity has demonstrated some amazing technological feats. Now that portion of its mission is nearly over, and the rover will move on to science operations. The hair-raising, fist-pumping, frenzy-whipping part is done – but it’s been great practice!

While the MSL entry, descent, and landing system may seem harebrained and silly, it is in fact quite conservative and driven by fundamental engineering decisions. The triumph of this system demonstrates to me how spacecraft engineers can set extraordinarily ambitious technical goals and achieve them in dramatic fashion. The JPL engineers who devised it are the types of people who design a device to last for three months and find it still happily ticking away six or eight or more years later. This thing was going to work. The toughest part was probably selling the concept to the NASA brass!

So, now we’ve got reinforcing knowledge that we can aim for the stars and hit them (well, planets, anyway). Let’s set out with some crazy-ambitious goals! And let’s set out for some places that let us answer fundamental questions.

This is my core disagreement with the NASA Decadal Survey, which prioritizes a Martian sample return mission above all else: such a sample return will advance the sub-sub-field of Martian geochemistry an incremental amount. This is not an ambitious enough goal to meet our demonstrated engineering capability! I don’t want to discover evidence that some place may have been habitable sometime in the distant past – I want to go someplace where we discover life because it’s staring right back at us.

Not so long ago, I proposed a mission concept for a subsurface probe to Jupiter’s ice moon Europa. Europa is intriguing because we already know that it has liquid water, and we already know that it has a strong energy source from Jovian tides – both of which are key ingredients for life as we know it! Even better, there are certain surface features on Europa, which – if our best models for how those features form are correct – are conduits from outer space to the ocean beneath. I suggested that we might develop a space vehicle that conducts a high-wire act above one of these exposed ice fractures, dropping probes down into the ocean below.

Soft landing on a Europan double ridge


Global Physics Department

Yesterday I was invited to give a presentation to the Global Physics Department, an online group of college and high school physics educators moderated by Prof. Andy Rundquist from Hamline University. The group gathers to hear virtual speakers on math, physics, science, and education on a weekly basis. Andy found my blog (hi!) and asked me to work up a presentation on science and its presence (or absence) in science fiction. You can see the recording here. (There are lots of other interesting presentations on the site, too.)

I spent a while thinking about the approach I wanted to take with this presentation. Of course, the easiest thing to do would have been to pick some choice examples from science fiction and pick them apart, criticizing the presence of sound in space or starships that move like boats and airplanes. I did a little of that, but I also wanted to bring up some other approaches that might encourage students to explore the intersections of science and science fiction, including looking at some of the things that science fiction gets mostly “right,” examining what it would take to give us science-fiction gadgetry using current knowledge, and trying to extrapolate realistic scenarios using scratch paper and our imaginations.

All in all, I think it was a fun evening – but I barely scratched the surface! My only “disappointment” was that it would have been fantastic to really open things up for discussion at the end. But with a topic so rich, it’s hard not to run into the time limit!

Antitechnocracy

A reporter from This American Life did something interesting for today’s broadcast: she brought a ninth-grade global warming skeptic and the executive director of the National Earth Science Teachers Association together in the show studio for a discussion. (Audio available here.) The dialogue was reasoned and civil. In quick summary: the scientist presented the skeptic with the best evidence available and went through the logical arguments, from temperature/CO2 correlations to ice core measurements. The skeptic then asked, “well, what about the following things?” – and presented some common climate-change-skeptic arguments (for example, why has there been higher snowfall in recent years in some places). The scientist went through each, point by point, and explained the science behind each and whether or not that science was relevant to the overall climate picture (for example, warmer temperatures allow the atmosphere to hold more water vapor, giving the higher snowfall – and, besides, our day-to-day weather experience is separable from the trend of the climate).

At the end, the reporter asked the skeptic how convincing the evidence was. Did she buy it? In short: no. She said that she could see how the scientist’s explanations could account for all the data, but… The ninth-grader then said something very astute: this situation is similar to the debates between scientists and educators on one side and creationists on the other. Some people can be convinced, some already accept the theory, but some won’t buy the scientific results no matter what. In other words, when we want to believe something, we tend to believe it. Regardless of evidence.

Next, the reporter asked the ninth-grader if the scientist could do something to sway her opinion, and what that would be. The ninth-grader thought for a moment, and decided that if she just had all the arguments from both sides laid out in front of her, and she got to make her own decision, then she would be more likely to accept the scientific consensus.

I have mixed feelings about that conclusion. On the one hand, I would like to laud this ninth-grader for her desire to weigh all the evidence and arguments and make an informed decision. (I definitely want to laud her for her presence and attitude on the radio. She was quite reasonable and did a great job expressing herself.) But, on the other hand, the scientist was right to point out that when we are trying to account for the behavior of the universe, our belief has no bearing on reality. And, if this ninth-grader really wants to make all her decisions and form all her opinions this way…she’s got several lifetimes of study, schooling, and degree programs ahead of her.

I wonder to what extent this sort of attitude is systemic in American society. Politicians and pundits challenge scientific findings on the basis of belief, politics, “common sense,” and “gut feelings.” School board candidates get elected by saying that they will “stand up to the experts.” We are supposed to feel that we live in a free country, that everybody’s opinion is valid, and that anyone can make a decision on any issue. While I think that everyone has (and should have) that potential, I am not comfortable with the recent anti-expertise trend that I think may result from that philosophy.

Let me provide a concrete example: suppose I go to the emergency room because there is something going dramatically wrong with my body. I don’t want to try to suss out a diagnosis using only common sense, and I don’t want a doctor who will base his medical decisions on similarly fuzzy impressions. I want the best doctor. I want an expert doctor. I want a doctor who knows all the details of the human body, how drugs and lab tests and surgical procedures work and interact, and how all that knowledge applies to my situation. Similarly, if I have a legal problem, I want an expert defense lawyer – because, though I have the right to defend myself and I’m decent at expressing my opinions, I know that a competent prosecutor could run circles around me. Heck, if I have a car problem, even though I’m an engineer for a living and I learned all about combustion cycles and the principles of mechanics in my physics classes, I want an expert mechanic to fix my problems. I’m a smart and capable guy, but I don’t have the time or desire to become an expert in all these things – so I rely on other people.

“Common sense” is great for some things, such as solving interpersonal problems. But common sense didn’t get us to the Moon, or win the World Wars, or invent the modern computer, or eradicate smallpox. Expertise did those things, and many more.

In the case of climate change, the expert scientists have long held a consensus conclusion. Most of the arguments denying global warming come from politicians and commentators. If we were all willing to go through the effort of learning the scientific process, learning the techniques and tricks that scientists use to produce their results, combing through and analyzing the data, and weighing our conclusions against other studies, then this debate wouldn’t be happening the way it is. Nor would it be happening this way if we simply accepted the conclusions of those experts who did devote their lives to all that data analysis and research. But it seems that Americans all want to make their own decisions on the matter – that they want to think that their beliefs, rather than data-driven conclusions, describe the way the universe works.

After the data is analyzed, though, there is an important role for common sense to play: determining the policy actions, if any, informed by expert conclusions. If economic conservatives want to accept that climate change is happening, but adopt the position that we should not take any action to prevent it, then I can respect that viewpoint as intellectually honest even if I disagree. But when such people deny climate change entirely, well…I wonder what kinds of doctors they want treating them.

A Universe Full of Worlds

This week has been great for exoplanet news!

Artist's concept of exoplanet systems. Credit: ESO/M. Kornmesser

Ever since the launch of the Kepler space telescope, it seems like extrasolar planet discoveries have been rolling in constantly. But this week at the American Astronomical Society meeting, there were several big announcements.

The first was the discovery of the smallest exoplanetary system yet, containing the smallest planets known. The star in question is a red dwarf, and none of its three (known) planets is larger than the Earth. One of them is about half Earth’s radius – approximately the same size as Mars.

The second announcement was the discovery of an object orbiting another star that seems to have a vast ring system – larger even than Saturn’s majestic rings! Astronomers found the rings when they passed in front of the planet’s star, dimming its light. I think the truly amazing thing about this discovery is not just that our telescopes can detect transits of rings, but that the scientists analyzing this event tracked the variation of starlight shining through the rings and discovered that these rings, like Saturn’s, have gaps. Gaps in ring systems form when the ring particles get into an orbital resonance with another orbiting body: the second body’s gravitational tugs push each ring particle at just the right frequency to knock it away from that orbital radius, clearing out a gap. Furthermore, computer models indicate that rings around planets are generally unstable – they spread out and disperse. Saturn’s rings, for instance, would not have lasted to their present age if not for the presence of shepherd moons. My point is this: in order for this extrasolar planet to have rings, especially rings with gaps, it must have moons.
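That resonance mechanism lends itself to a quick back-of-the-envelope check. By Kepler’s third law (orbital period squared scales with orbital radius cubed), a ring particle that completes p orbits for every q orbits of a moon sits at a radius of a_moon · (q/p)^(2/3). Here is a minimal sketch of that calculation, using Saturn’s own rings as the test case – the Cassini Division lies near the 2:1 resonance with the moon Mimas:

```python
def resonance_radius(a_moon_km, p, q):
    """Orbital radius (km) where a ring particle completes p orbits
    for every q orbits of a moon at radius a_moon_km.
    From Kepler's third law: T^2 is proportional to a^3."""
    return a_moon_km * (q / p) ** (2.0 / 3.0)

# Mimas orbits Saturn at about 185,539 km. The 2:1 resonance
# predicts a gap near 117,000 km -- right where the Cassini
# Division is actually observed.
gap_km = resonance_radius(185539, 2, 1)
print(f"{gap_km:.0f} km")
```

The same scaling is what let the astronomers interpret the gaps in the transit light curve as the signature of unseen moons.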

Third, and most exciting in my opinion, a survey of star systems observed with a gravitational microlensing technique concluded that there are more planets in our galaxy than stars. Put another way: on average, every star has at least one planet! Astronomers used to wonder: is the Solar System exceptional in the universe? And, if so, what made it so special? Now, there are more and more indications that planetary systems like ours are not just out there – they’re downright common!

The thing that makes exoplanet research so fascinating to me is the sheer variety of worlds discovered. There are so many stars out there, and so many planets, that it seems almost harder to imagine a world that can’t happen than a world that might. And some of the newly discovered worlds might give George Lucas or Gene Roddenberry a run for their money! Nothing drove this point home to me more than an astronomy lecture I attended a few years ago, in grad school: the speaker talked about M dwarf stars, and how the “habitable zone”* of some of those stars would be at such a small orbital radius that a planet in that zone would be tidally locked – rotating once per orbit, always pointing one hemisphere towards the star. But, continued the speaker, we have discovered exoplanet orbits with rather high eccentricity – and those worlds would “rock” back and forth around their tidal equilibrium. On those worlds, you could stand on a beach and watch the sun rise over the ocean…then, a few hours later, the sun would reach its zenith, turn around, and sink right back down to set at the same point on the horizon!
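Why would an M dwarf’s habitable zone sit so close in? A rough rule of thumb: a planet receives Earth-like sunlight at the distance where stellar flux matches Earth’s, which (since flux falls off as distance squared) scales as the square root of the star’s luminosity. Here is a sketch of that estimate; the 2% luminosity figure is just an illustrative value for a dim M dwarf, not a measurement of any particular star:

```python
import math

def habitable_zone_au(luminosity_solar):
    """Rough distance (AU) at which a planet receives Earth-like
    insolation: scale the Sun-Earth distance by sqrt(L / L_sun),
    since flux falls off as 1/distance^2."""
    return math.sqrt(luminosity_solar)

# An M dwarf emitting ~2% of the Sun's luminosity would have its
# habitable zone at roughly 0.14 AU -- about a third of Mercury's
# distance from the Sun, close enough that tidal locking is likely.
print(f"{habitable_zone_au(0.02):.2f} AU")
```

At that distance, tidal forces (which grow steeply as the orbit shrinks) would spin the planet down into the locked, one-face-forward state the speaker described.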

Then, a few weeks later, I heard another speaker talking about Gliese 581g – alias “Zarmina” – shortly after its (potential) discovery. This planet, if it truly exists, lies smack-dab in that habitable zone* but would be locked to its star, so one hemisphere is always in daylight and one is always dark. Naturally, many sci-fi fans attached themselves to the idea that only the strip of land near the terminator would be habitable. (io9 even posted a bunch of whimsical concept art from the hypothetical Zarmina Ministry of Tourism.) But in this lecture, I learned that the climate on such a world would likely make it even stranger – rather than being habitable in a twilight band circling the globe, the world would be encased in ice with a liquid sea directly beneath its sun: the astronomer called this an “eyeball Earth.” What strange and intriguing cultures might arise on such a world?

And that’s not all. We now know of exoplanets orbiting binary stars, for instance. And more space missions designed to hunt for – or further investigate – exoplanets are advancing through the design process. Who knows what we will find in the future?

Chances are, if you can imagine it arising from the physics we know, it does exist out there. Now the questions become: how can we explore these places? And how many other explorers are out there, looking back at us?

* I find the term “habitable zone” bothersome, because we have coined the term based on a single data point. However, the alternative “liquid-water zone” is misleading, because we know that there is liquid water in our outer Solar System. (Heck, Europa may even be habitable, we don’t know!) But “liquid-surface-water zone,” which is what astronomers really mean by this term, is just awkward.

Flying to Titan

Decadal surveys and other prioritizations of potential NASA exploration missions often rank one thing very highly: a sample-return mission from Mars. However, I think there are mission proposals out there that are much more scientifically interesting, more technologically challenging, and more engaging to the public. This is one: a Titanian UAV!

The idea is to send an airborne vehicle to Saturn’s moon Titan, where it would fly around the moon, observing surface features from its high vantage point. A powered flyer, as opposed to a balloon, has the advantage of being able to travel to a specific location – such as the moon’s liquid-hydrocarbon lakes!

The proposal team uses some clever mission planning approaches to handle the limitations of the aircraft: for example, using glide phases to hoard power for downlink sessions. Their nominal mission duration is one year: a year of exploring another planet from the air, a year of images and science data depicting a world of lakes, rivers, ice, and rain. The full proposal is online here.

I find the idea exciting, and I hope that NASA’s governing councils soon prioritize exploration of those extraterrestrial locations most likely to harbor life – like Europa, Enceladus, and Titan.