All posts by josephshoer

This makes me a *little* happier about the SLS

NASAspaceflight posted an article about the human spaceflight “exploration roadmap” using the Senate Space Launch System rocket. It makes me feel a bit better about the SLS situation.

I’m glad to see that the roadmap revolves around interplanetary vehicles assembled in space, and I’m glad to see that there’s some careful thought here about how to move the human presence throughout the Solar System in a more sustainable way than flags-and-footprints missions. Still, I’m not convinced that the SLS is an efficient or effective way to do that compared with, say, a cluster of Falcon launches. Remember: the SLS is not going to reach its peak design payload capacity until 2030 or later, and it will likely fly once a year, which doesn’t bode well for the parts of this roadmap that call for a “fleet of SLS” launches.

The best part of this article is that it demonstrates that NASA is still thinking about how it can achieve human spaceflight capabilities – regardless of what a petulant Congress insists on.

And people say the space program is ending…!

NASA just closed their latest application drive for astronaut candidates. A staggering 6372 people applied – the second-largest candidate group in NASA’s entire history! (Personally, I’m rooting for this one.) What gives? Do people love science and technology a lot more all of a sudden? Is it American pride? Is it Newt’s promise to build a Moon base for less than $2 billion?*

It is clear to anyone who follows space activities that the end of the Space Shuttle program was not anything close to the end of NASA itself. The astronaut program is no exception: every few months, a Russian Soyuz blasts off carrying three human beings up to the Space Station or back down to Earth. The completion of the Shuttle program also meant the completion of Space Station construction, allowing the ISS to become an orbital scientific workstation in earnest.

Perhaps it’s the profusion of photoblogging, twittering, and facebooking astronauts driving the upsurge. When we have astronauts writing stuff like this, while in orbit, allowing people to get their own glimpse of life in space, it’s small wonder that people still want to be astronauts!

* In case you’re curious, $1.9 billion would be 10% of NASA’s budget (devoted to prizes, of course). For comparison, the entire Apollo program cost approximately $25 billion. In 1973 dollars.

Gaming Machine: Part the First

I have decided to build myself a gaming computer.

This decision was spurred on by three factors: (1) I want to know what happens in StarCraft 2, and my current computer can’t run it, (2) I want to play Skyrim, and my current computer would probably die a horrible death before messily regurgitating that disc, and (3) I don’t want to pay a ton of money.

It’s been interesting to discover that, with the ability to decide exactly what goes into my computer and what doesn’t, I can get a good performance machine without spending more than about $1000 on the PC hardware. This article has been extremely helpful as an example.

For example, it seems pretty clear that Intel processors dominate the performance market. Lots of commercial gaming or performance PCs are all racing along on Intel Core i7 CPUs, which run up to $1000 by themselves – but everything I’ve read suggests that a $210 Core i5-2500 is superb for gaming and that anything more expensive is way beyond the point of diminishing returns in terms of cost for performance. The price difference between the Core i5 and the i7 in a commercial gaming system can then go toward a higher-powered graphics card, which has much more of an impact on game performance.

Of course, to balance out the relatively easy decision on the CPU, graphics cards seem like much more of a muddle. I’m going for a gaming card, but I decided not to look at the absolute top-tier simply because those cards are $600 plus. Mostly, I’m looking at the GeForce GTX 570 and the Radeon 6970. It seems like neither Nvidia nor ATI is a clear brand leader, but the GTX 570 edges out the Radeon in performance just a bit. When I started this project a few weeks ago, I was disappointed to see that both those cards are members of series that are just over a year old at this point – meaning that it’s likely that there will be new cards coming out soon. In other words: now is not the time when a graphics card consumer is in the best buying position. ATI proved my point just recently by announcing the Radeon 7970, which is their new high-end card. It’s above my target price point, sadly – but the still-rumored 7950 would be just about perfect for me if it had been announced at the same time. Darn!

However, something that makes the graphics-card situation particularly interesting to me is that, since the last time I was looking at computer components, the video card manufacturers developed technologies to allow similar graphics cards to work in parallel. I was interested to find that, on benchmarks, the gain of adding a second card can be up to almost an additional 80% of graphics power. I didn’t expect that to be an additional 100%, but neither did I expect it to be much more than, say, 30-50%. So I have an interesting possibility: I could get one graphics card now, and if this year’s releases blow it out of the water, I can buy a second one at a discounted price and boost my system performance substantially.
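The arithmetic behind that possibility is simple enough to sketch. In the Python snippet below, the frame rates are made up for illustration; only the scaling-efficiency figures echo the benchmark numbers discussed above:

```python
# Back-of-the-envelope multi-GPU scaling. The fps numbers are hypothetical;
# the efficiencies echo the benchmark figures discussed above.
def effective_performance(single_card_fps, n_cards, scaling_efficiency):
    """FPS with n identical cards, assuming each extra card contributes
    only a fraction (scaling_efficiency) of a full card's throughput."""
    return single_card_fps * (1 + (n_cards - 1) * scaling_efficiency)

base = 60.0  # fps from one hypothetical card
print(effective_performance(base, 2, 0.8))  # 108.0 fps: the ~80% gain reviews report
print(effective_performance(base, 2, 0.4))  # 84.0 fps: the 30-50% I had expected
```

At 80% scaling, a discounted second card nearly doubles the system, which is what makes the buy-one-now, upgrade-later plan attractive.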

Other things are less important to me: The case isn’t a big deal as long as it holds all my stuff. I know about how much RAM I want, but I don’t want to fill all my DIMMs up so I can upgrade later if I desire. The game with motherboards seems to be making sure the board supports all the other components, and the power supply should have well more than enough capacity to handle everything else. I’ve seen articles that benchmark different motherboards or RAM packages, but they have such a tiny effect compared to the processor and graphics card that I’m not worried about that. (I’m also not thinking of overclocking, which is where more of the RAM and motherboard issues seem to matter.) The one thing that I keep finding puzzling is that RAM splits pretty neatly into “budget” vs. “high-end” memory – but I struggle to find what sort of impact that has, other than the dramatically designed heat sinks on the high-end stuff that make those DIMMs look like Klingon weaponry. That seems like a cosmetic thing to me, but many users and reviews seem to prefer the high-end stuff without explaining too much about why.

I’m looking forward to piecing everything together. For one thing, I like the idea of assembling all the components. But for another, it seems like the world of computer games is a more lively forum for science fiction plots than movies and TV, and I want to get in on that.

Now if only Star Wars: The Old Republic weren’t an MMO…I know it will all devolve into repetitive dungeon raids, but it just looks so awesome…

Support

I am a member of the “millennial” generation. You know, the stereotypical hipster kids who like some band you’ve probably never heard of and are living with their parents, unemployed. Except…that’s not me.

I graduated from college and immediately went to grad school. In the sciences, math, and engineering, students generally get paid stipends to go to grad school. Oh, sure, it wasn’t a huge stipend, but it was enough not only to pay the bills but also to let me squirrel away some savings. I was in graduate school during the big financial bust of 2008, but I kept working and kept getting that stipend, thanks in part to the fact that my university valued its grad students enough to guarantee our funding, and in part to the support my lab received from various organizations, including NASA – an agency of the federal government.

Immediately after I finished my degree, I got a job. In fact, I even had to push my start date back a little bit, because I needed some time to finish up university obligations and organize my final dissertation. My total period of unemployment was about a week, in early 2011, and then I started working. As it happens, the job I took is with a major commercial spacecraft company; the biggest program we are working on right now is a batch of satellites that the US Air Force bought to replace older models.

So, here’s one person’s story: I’ve directly benefited from a government and from institutions that value advanced education, basic research, high technology, and infrastructure investments. And the recession didn’t touch me.

Huh. How about that.

Antitechnocracy

A reporter from This American Life did something interesting for today’s broadcast: she brought a ninth-grade global warming skeptic and the executive director of the National Earth Science Teachers Association together in the show studio for a discussion. (Audio available here.) The dialogue was reasoned and civil. In quick summary: the scientist presented the skeptic with the best evidence available and went through the logical arguments, from temperature/CO2 correlations to ice core measurements. The skeptic then asked, “well, what about the following things?” – and presented some common climate-change-skeptic arguments (for example, why has there been higher snowfall in recent years in some places). The scientist went through each, point by point, and explained the science behind each and whether or not that science was relevant to the overall climate picture (for example, warmer temperatures allow the atmosphere to hold more water vapor, giving the higher snowfall – and, besides, our day-to-day weather experience is separable from the trend of the climate).

At the end, the reporter asked the skeptic how convincing the evidence was. Did she buy it? In short: no. She said that she could see how the scientist’s explanations could account for all the data, but… The ninth-grader then said something very astute here. This is a similar situation to the debates between scientists and educators and creationists. You have some people who can be convinced, and some who accept the theory, but then there are also some people who won’t buy the scientific results no matter what. In other words, when we want to believe something, we tend to believe it. Regardless of evidence.

Next, the reporter asked the ninth-grader if the scientist could do something to sway her opinion, and what that would be. The ninth-grader thought for a moment, and decided that if she just had all the arguments from both sides laid out in front of her, and she got to make her own decision, then she would be more likely to accept the scientific consensus.

I have mixed feelings about that conclusion. On the one hand, I would like to laud this ninth-grader for her desire to weigh all the evidence and arguments and make an informed decision. (I definitely want to laud her for her presence and attitude on the radio. She was quite reasonable and did a great job expressing herself.) But, on the other hand, the scientist was right to point out that when we are trying to account for the behavior of the universe, our belief has no bearing on reality. And, if this ninth-grader really wants to make all her decisions and form all her opinions this way…she’s got several lifetimes of study, schooling, and degree programs ahead of her.

I wonder to what extent this sort of attitude is systemic in American society. Politicians and pundits challenge scientific findings on the basis of belief, politics, “common sense,” and “gut feelings.” School board candidates get elected by saying that they will “stand up to the experts.” We are supposed to feel that we live in a free country, that everybody’s opinion is valid, and that anyone can make a decision on any issue. While I think that everyone has (and should have) that potential, I am not comfortable with the recent anti-expertise trend that I think may result from that philosophy.

Let me provide a concrete example: suppose I go to the emergency room because there is something going dramatically wrong with my body. I don’t want to try to suss out a diagnosis using only common sense, and I don’t want a doctor who will base his medical decisions on similarly fuzzy impressions. I want the best doctor. I want an expert doctor. I want a doctor who knows all the details of the human body, how drugs and lab tests and surgical procedures work and interact, and how all that knowledge applies to my situation. Similarly, if I have a legal problem, I want an expert defense lawyer – because, though I have the right to defend myself and I’m decent at expressing my opinions, I know that a competent prosecutor could run circles around me. Heck, if I have a car problem, even though I’m an engineer for a living and I learned all about combustion cycles and the principles of mechanics in my physics classes, I want an expert mechanic to fix my problems. I’m a smart and capable guy, but I don’t have the time or desire to become an expert in all these things – so I rely on other people.

“Common sense” is great for some things, such as solving interpersonal problems. But common sense didn’t get us to the Moon, or win the World Wars, or invent the modern computer, or eradicate smallpox. Expertise did those things, and many more.

In the case of climate change, the expert scientists have long held a consensus conclusion. Most of the arguments denying global warming come from politicians and commentators. If we all were willing to go through the effort of learning the scientific process, learning the techniques and tricks that scientists use to produce their results, combing through and analyzing the data, and weighing our conclusions against other studies, then this debate wouldn’t be happening the way it is. Nor would it be happening so if we accepted the conclusions of those experts who did devote their lives to all that data analysis and research. But it seems that Americans all want to make their own decisions on the matter – that they want to think that their beliefs, rather than data-driven conclusions, describe the way the universe works.

After the data is analyzed, though, there is an important role for common sense to play: determining the policy actions, if any, informed by expert conclusions. If economic conservatives want to accept that climate change is happening, but adopt the position that we should not take any action to prevent it, then I can respect that viewpoint as intellectually honest even if I disagree. But when such people deny climate change entirely, well…I wonder what kinds of doctors they want treating them.

A Universe Full of Worlds

This week has been great for exoplanet news!

Artist's concept of exoplanet systems. Credit: ESO/M. Kornmesser

Ever since the launch of the Kepler space telescope, it seems like extrasolar planet discoveries have been rolling in constantly. But this week at the American Astronomical Society meeting, there were several big announcements.

The first was the discovery of the smallest exoplanetary system yet, containing the smallest planets known. The star in question is a red dwarf, and none of its three (known) planets is larger than the Earth. One of them is about half Earth’s radius – approximately the same size as Mars.

The second announcement was of the discovery of an object orbiting another star that seems to have a vast ring system – larger even than Saturn’s majestic companion rings! Astronomers found the rings when they passed in front of their planet’s star, dimming its light. I think the truly amazing thing about this discovery is not just that our telescopes can detect transits of rings, but that the scientists analyzing this event tracked the variation of sunlight shining through the rings and discovered that these rings, like Saturn’s, have gaps. Gaps in ring systems form when the ring particles get into an orbital resonance with another orbiting body: the second body’s gravitational tugs push the ring particle at just the right frequency to knock it away from that orbital radius, clearing out a gap. Furthermore, computer models indicate that rings around planets are generally unstable – they spread out and disperse. Saturn’s rings, for instance, would not have lasted to be the age that they are – if not for the presence of shepherd moons. My point is this: in order for this extrasolar planet to have rings, especially rings with gaps, it must have moons.
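That resonance argument can be made concrete with Kepler’s third law: a ring particle completing p orbits for every q orbits of a moon sits at a = a_moon · (q/p)^(2/3). Here is a quick Python sketch using Saturn’s moon Mimas, whose 2:1 resonance is the textbook explanation for the Cassini Division (the orbital radius is a published value; treat the result as approximate):

```python
# Location of a p:q mean-motion resonance with a moon, from Kepler's
# third law (T^2 proportional to a^3): a = a_moon * (q/p)**(2/3).
def resonance_radius(a_moon_km, p, q):
    """Orbital radius where a ring particle orbits p times per q moon orbits."""
    return a_moon_km * (q / p) ** (2.0 / 3.0)

a_mimas = 185_539  # km: semi-major axis of Saturn's moon Mimas
print(resonance_radius(a_mimas, 2, 1))  # ~116,900 km, close to the Cassini Division
```

The same calculation, run for an unseen exomoon, is how you would start turning observed ring gaps into a constraint on where that moon orbits.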

Third, and most exciting in my opinion, there has been a survey of star systems imaged with a gravitational lensing technique, and it concluded that there are more planets in our galaxy than stars. Put another way: on average, every star has at least one planet! Astronomers used to wonder: is the Solar System exceptional in the universe? And, if so, what made it so special? Now, there are more and more indications that planetary systems like ours are not just out there – they’re downright common!

The thing that makes exoplanet research so fascinating to me is the sheer variety of worlds discovered. There are so many stars out there, and so many planets, that it seems almost harder to imagine a world that can’t happen than a world that might. And some of the newly discovered worlds might give George Lucas or Gene Roddenberry a run for their money! Nothing drove this point home to me more than an astronomy lecture I attended a few years ago, in grad school: the speaker talked about M dwarf stars, and how the “habitable zone”* of some of those stars would be at such a small orbital radius that a planet in that zone would be tidally locked – orbiting once per day, always pointing one hemisphere towards the star. But, continued the speaker, we have discovered exoplanet orbits with rather high eccentricity – and those worlds would “rock” back and forth around their tidal equilibrium. On those worlds, you could stand on a beach and watch the sun rise over the ocean…then, a few hours later, the sun would reach its zenith, turn around, and sink right back down to set at the same point on the horizon!
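To see why those habitable-zone orbits hug their stars so tightly, scale from the Sun: a planet gets Earth-like flux at d = √(L/L☉) AU, and Kepler’s third law then gives the year length. The stellar luminosity and mass below are round, illustrative values for a generic dim M dwarf, not any particular star:

```python
from math import sqrt

# Rough habitable-zone distance and year length for a dim star, scaling
# from the Sun (1 AU, 1 year). Stellar values below are illustrative.
def hz_distance_au(luminosity_solar):
    # A planet receives Earth's flux when L / d^2 = 1 in solar units,
    # so d = sqrt(L) in AU.
    return sqrt(luminosity_solar)

def orbital_period_years(a_au, mass_solar):
    # Kepler's third law in solar-system units: T^2 = a^3 / M.
    return sqrt(a_au ** 3 / mass_solar)

L, M = 0.002, 0.2      # luminosity and mass of an assumed generic M dwarf
a = hz_distance_au(L)  # ~0.045 AU, far inside Mercury's orbit
T = orbital_period_years(a, M) * 365.25
print(f"HZ at {a:.3f} AU, year = {T:.1f} days")  # a "year" of about a week
```

An orbit that close and that fast is exactly the regime where tidal forces lock a planet’s rotation to its orbit.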

Then, a few weeks later, I heard another speaker talking about Gliese 581g – alias “Zarmina” – shortly after its (potential) discovery. This planet, if it truly exists, lies smack-dab in that habitable zone* but would be locked to its star, so one hemisphere is always day and one is always dark. Naturally, many sci-fi fans attached themselves to the idea that only the strip of land near the terminator would be habitable. (io9 even posted a bunch of whimsical concept art from the hypothetical Zarmina Ministry of Tourism.) But in this lecture, I learned that the climate on such a world would likely make it even stranger – rather than being habitable in a twilight band circling the globe, the world would be encased in ice with a liquid sea directly beneath its sun: the astronomer called this “eyeball” Earth. What strange and intriguing cultures might arise on such a world?

And that’s not all. There are more known exoplanets orbiting binary stars, for instance. And some more space missions designed to hunt for – or investigate existing – exoplanets are advancing through the design process. Who knows what we will find in the future?

Chances are, if you can imagine it arising from the physics we know, it does exist out there. Now the questions become: how can we explore these places? And how many other explorers are out there, looking back at us?

* I find the term “habitable zone” bothersome, because we have coined the term based on a single data point. However, the alternative “liquid-water zone” is misleading, because we know that there is liquid water in our outer Solar System. (Heck, Europa may even be habitable, we don’t know!) But “liquid-surface-water zone,” which is what astronomers really mean by this term, is just awkward.

Calling All Space Tech!

In grad school, I became a big fan of NIAC (the NASA Institute for Advanced Concepts) and the Office of the Chief Technologist. These wings of the NASA organization support research into far-flung, visionary technological concepts. They are the parts of NASA pushing for the kinds of research that will usher in the next generation of space exploration.

The new NIAC call for proposals is out. Interestingly, this time it includes a specific call for “citizen science.” So, if you’ve got some crazy ideas for spacecraft technology…why not try for it?

Flying to Titan

Decadal surveys and other prioritizations of potential NASA exploration missions often rank one thing very highly: a sample-return mission from Mars. However, I think there are mission proposals out there that are much more scientifically interesting, technologically challenging, and engaging to the public. This is one: a Titanian UAV!

The idea is to send an airborne vehicle to Saturn’s moon Titan to fly around the moon, observing surface features from its high vantage point. A powered flyer, as opposed to a balloon, has the advantage of being able to travel to a specific location, such as the moon’s liquid lakes!

The proposal team uses some clever mission planning approaches to handle the limitations of the aircraft: for example, using glide phases to hoard power for downlink sessions. Their nominal mission duration is one year: a year of exploring another planet from the air, a year of images and science data depicting a world of lakes, rivers, ice, and rain. The full proposal is online here.

I find the idea exciting, and I hope that NASA’s governing councils soon prioritize exploration of those extraterrestrial locations most likely to harbor life – like Europa, Enceladus, and Titan.

The Biggest Science Errors in (hard) Sci-Fi

One of the problems with having just watched a whole lot of Star Trek is that, while I like a lot of the characters and plots and ideals, it’s a poster show for demonstrating some of the biggest scientific problems in modern science fiction. So, without further ado, I break a long silence to present my Top 3 Science Errors in Sci-Fi.

#1: Sensors

If you are the captain on the bridge of a Star Trek ship, you have the advantage of being well-informed beyond the limits of physical possibility. Your science and tactical officers can consult the Sensors and instantly list for you every object within a few light years. They can tell you what each object is made of. They can give you a map of a planet surface, or approach a never-encountered-before alien spaceship and produce an interior schematic. They can rattle off the number, species, sentience, and state of health of every living thing on a planet. They can tell you what systems are active on an enemy ship. They can even quote for you what the enemy ship’s computers are calculating.

Decades’ worth of scientific data-gathering and interpretation, happening in an instant

It might seem like “sensors” capable of many of these feats are plausible, given some of the technologies and techniques available to us today. We have telescopes that conduct all-sky surveys and see billions of light-years; so why not give Starfleet captains an immediate cosmic census? We can do spectroscopy to determine a substance’s constituent elements remotely. And we can detect electromagnetic signals, which might emanate from even the smallest electrical circuits. But what’s missing from this picture is the presence of uncertainty, noise, and time delays, all of which make measurements harder – and make the conclusions you can draw from those measurements much less certain. At the very most, when Spock or Data or Dax quotes the composition of a strange starship, they should include a probability with each component – and those probabilities should be well under 100%! Not only will that percentage depend on the quality of the instruments, the measurements, and the data processing, but there are even certain physical limits that prevent it from ever reaching 100% or even from getting to a reasonable level of confidence without a certain amount of observation time. If you want to map an alien planet, for instance, you need to spend time in orbit imaging and analyzing its entire surface, if for no other reason than that you can’t observe more than a small sliver of the planet at once!
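The observation-time requirement comes straight out of photon statistics: a shot-noise-limited measurement has SNR ≈ √N for N photons collected, so doubling your confidence costs four times the photons. A minimal sketch, with a made-up photon rate:

```python
# Photon counting: shot noise scales as sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
# Reaching SNR = 5 takes ~25 photons; SNR = 100 takes 10,000.
def integration_time_s(photon_rate_per_s, target_snr):
    """Observation time needed for a given SNR, counting shot noise only
    (no detector noise or background, so this is a best case)."""
    photons_needed = target_snr ** 2
    return photons_needed / photon_rate_per_s

# A hypothetical faint source delivering 10 photons/s to our detector:
print(integration_time_s(10, 5))    # 2.5 s for a marginal detection
print(integration_time_s(10, 100))  # 1000.0 s: confidence costs time
```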

Another important point involves the physical infrastructure required to give instruments the sensitivity they would need to do all these things with high certainty. Suppose we want to alert Captain Picard to the fact that the Borg ship is charging its weapons to fire. (And, obviously, I don’t mean Borg led by a relatable megalomaniac queen; I mean terrifying faceless drone Borg coming to assimilate you.) Presumably, the phrase “charging weapons” means that the energy in some kind of battery or capacitor bank is building up. We could, theoretically, detect photons emitted from such a system. But, first of all, I would think that the Borg shield systems like that, since they value efficiency so much – so very few photons will come out for us to detect. Second, a single photon won’t be enough for us to tell what’s going on. We need enough to get a good signal-to-noise ratio: that is, we need enough photons from the Borg weapons system to confidently say that they are from an energy buildup in that weapon system, and not from anything else. If there’s a fixed number of photons coming out of the Borg weapon, then there are basically two ways to build that confidence by measuring more photons: give your sensors a long time to measure, or catch photons from a larger area. We want to give Picard a result fast, so we’d have to go for the bigger photon-capturing. Much bigger. Especially if you want to pin down the exact location of those photon emissions: angular resolution at any given wavelength of light depends directly – and only – on the telescope baseline size. Therefore, first up on the Enterprise’s battle plan should be the deployment of a giant reflector dish. I think something with a diameter of a couple hundred kilometers should suffice!
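That dish size isn’t a throwaway number; it falls out of the diffraction limit, θ ≈ 1.22λ/D. Here’s the calculation with assumed inputs (a thermal-infrared wavelength, a 1-meter feature, a range of one light-minute):

```python
# Diffraction limit: smallest resolvable angle is theta ~ 1.22 * lambda / D,
# so the aperture needed to resolve a feature is D = 1.22 * lambda / theta.
def aperture_needed_m(wavelength_m, feature_size_m, distance_m):
    theta = feature_size_m / distance_m  # angular size of the feature, radians
    return 1.22 * wavelength_m / theta

# Assumed scenario: resolve a 1 m weapons port in thermal infrared
# (10 microns) on a ship one light-minute away.
light_minute = 60 * 3.0e8  # meters
print(aperture_needed_m(10e-6, 1.0, light_minute))  # ~2.2e5 m: a 220 km dish
```

Shorter wavelengths or closer ranges shrink the required aperture, but resolving meter-scale detail across tactically interesting distances always demands structures far beyond any depicted starship hull.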

The impossibility of Sensors as we usually see them depicted could have a huge impact on many sci-fi storylines. For instance, characters should have to make decisions on much more restricted information – or spend much more time considering their actions. Our characters will also find themselves in many more situations where they can’t solve the problems we throw at them, simply because they don’t have enough information about the problem or they have to take too long to figure things out. There are other impacts, too. For instance, I’ve seen arguments on the web that stealth spacecraft are impossible (because any spaceship with humans in it will be at a temperature much higher than ambient space, so it will emit thermal radiation). These arguments assume the existence of Sensors, and further assume that the Sensors will always trump alien thermal management schemes. And in hard sci-fi circles, particularly in computer-game universes, there is also the concept of active versus passive Sensors: active Sensors are like radar, which bounce a signal off of enemy ships (thus making your ship easier to detect); while passive Sensors are like cameras, which just collect emissions. However, though that distinction may be meaningful, it’s not practical! Unless you really want to deploy those huge detector telescopes, you had better break out the radar if you want to locate your enemies before they fire all their missiles.

#2: Orbits

When you arrive at a planet from deep space, you want to park your spaceship. The parking space is an orbit. Contrast with deep-space maneuvering, when your spaceship can go any direction it likes any time it wants.

Well, no. Not exactly. Not at all, in fact.

Orbits aren’t just for parking – they dictate everything about moving around in space. The International Space “Station” is always moving at many thousands of kilometers per hour because of orbits. Geostationary satellites are at a really high altitude – over 35,000 km – because of orbit mechanics. We only launch space probes to Mars about once every two years because of orbits. Interplanetary space probes can only reach certain destinations with the amount of fuel they carry because of orbits.
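Two of those numbers fall right out of Kepler’s third law, and you can check them in a few lines of Python (the constants are standard published values):

```python
from math import pi

mu_earth = 3.986e14  # m^3/s^2, Earth's gravitational parameter

# Geostationary altitude: the orbit radius where one revolution takes
# one sidereal day, from Kepler's third law a^3 = mu * T^2 / (4 pi^2).
T = 86164.0  # seconds in a sidereal day
a = (mu_earth * T**2 / (4 * pi**2)) ** (1.0 / 3.0)
print((a - 6.378e6) / 1e3)  # ~35,786 km above Earth's surface

# Mars launch windows: Earth laps Mars once per synodic period.
T_earth, T_mars = 1.0, 1.881  # orbital periods in years
synodic = 1.0 / (1.0 / T_earth - 1.0 / T_mars)
print(synodic)  # ~2.1 years: why we launch to Mars roughly every 26 months
```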

Whenever two sci-fi spaceships meet at a planet, they aren’t going to be exactly next to each other except by design. If their orbits are inclined or eccentric relative to each other, or at different altitudes, then the ships are going to be continuously moving around relative to one another. If these ships get into a space battle, then they are likewise going to be moving around each other in arcing paths. The trajectories of the arcs will change as the ships maneuver, but there is definitely going to be constant, hectic motion, and it definitely won’t all align nicely with some arbitrary 2D plane.

Nice neat flying-wedge-style formations in orbit?

The worst offender on this point, in my opinion, is Ender’s Game. One of the premises of the book is the argument that, in space, the enemy could attack you from any direction and at such speed that you cannot anticipate the attack; therefore, defense of a planet is impossible and everyone has to be on the attack all the time. This is an interesting idea, but it’s true only if the attacking spacecraft have unlimited power and propellant. In reality, those resources must be limited and so the attacking fleet is going to have to take some orbital trajectory to get from their planet to yours. Just like NASA planning the launch of a Mars rover, they’ve got to pick their launch window carefully – which means that you actually could predict which trajectories the attackers are more likely to use.

The mechanics of orbits matter to sci-fi stories: they are like the layout of highways and roads across a country. If some characters need to get from one planet to another, there are certain orbits they could use and certain orbits they could not. They determine how long the trip takes, and what subsequent destinations the characters can reach. And orbits keep ships moving with respect to one another along curving paths in all three spatial dimensions, making spacecraft behave in a manner that is completely unlike watercraft (or even aircraft), which is how we usually see them depicted.

#3: Co-opting a Current Science Word to Mean “Magic”

Nanotechnology. Genetic engineering. Biotechnology. Mutation. Cybernetics.

All of these words, even the more sciency-sounding ones, are often thrown around in sci-fi as synonyms for the word “magic.” My favorite examples come from Peter Hamilton’s Void Trilogy, when characters with all sorts of technological implants “manifest a quantum field function” in order to do things (unlock doors, tap into computers, fire lasers, etc). What the heck does this mean? Hamilton just strung together some cool-sounding words. His characters might as well be waving magic wands or using the Force. At least the Star Wars universe is honest about this!

The thing is, terms such as the ones I listed describe technologies that we have now and don’t mean at all what the sci-fi writers think they mean. For example: nanotechnology. Nanotechnology is the manufacturing capability to build things with sizes measured in nanometers, and it happens all the time in the electronics industry without giving anybody superpowers. What nanotechnology does give us is a ton of transistors on a silicon chip. Same for genetic engineering: we have been splicing genes and resequencing DNA for decades now – and cruder genetic engineering in the form of selective breeding goes back thousands of years before Gregor Mendel ever crossed his pea plants. You can thank genetic engineering for apples and insulin, but again – no mind-melding, magnetism-wielding, or time-winding powers.

I do not think that it’s inconceivable or wrong for writers to take the Arthur C. Clarke leap, and posit that sufficiently advanced technology is “indistinguishable from magic.” But in order for that to work, the technologies have to either come with technical explanations built on concepts we can’t relate to our current understanding, or come with no explanation at all. Think of explaining that Droid phone to a Roman: it wouldn’t make sense to say, “Oh, you have aqueducts. Well, over time, aqueducts got better and smaller and eventually people built this handheld device which works by really good aqueducts.” That extrapolation of technology is misleading and incorrect. The “indistinguishable from magic” idea comes into play because the Roman doesn’t understand electrons or transistors or LCDs, and those terms are completely meaningless to him.

Often, terms like these are handled well – and science fiction is a tremendous vehicle for exploring the potential implications of emerging sciences. Where I have my biggest problem is when a story says something like, “after the introduction of nanotechnology in 2167, nanotech-enhanced human muscles, nerves, and brains entered the market.” Lines like that show that the writer just thought the word “nanotech” sounded cool and didn’t want to think very hard about how the theories or technology we have now would feed into the technology of tomorrow. It’s a cop-out that doesn’t align well with either our current understanding or the effects the writer is trying to describe. Where those cases are concerned, I kind of like it better when we have “the Force” and “red matter” and other such things without any explanation.

Runners-Up

I decided on a top three based on those issues that I think have the biggest impact on sci-fi stories. There are, of course, a whole host of other science problems in most popular sci-fi.

The closest runner-up, in my mind, has to be designing spacecraft like ships – with planar decks stacked on top of one another, such that if you stand on the surface of a deck you can face in the direction of travel of the spaceship. There is no reason whatsoever to do that. In fact, if you’re interested in getting some artificial gravity, it makes much more sense to stack the decks vertically, so that the lowest deck is toward the engines and the thrust is always “up.” But if sci-fi starship designers want to really go nuts, they ought to start canting decks at angles, wrapping them around cylinders, or just having a string of cabins that the starship crew floats between. Written sci-fi is much better about all this than movies, TV, or games are. (Artificial gravity itself is something I’m willing to give sci-fi movies and TV shows a pass on, simply because I understand the production limitations and I’d rather see more innovative sci-fi come out of Hollywood than less. It can fall into the “magic” category. But it’s no excuse to design ship-style decks.)
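The deck-wrapping idea isn’t hard to put numbers on, either. Spinning a cylindrical hull produces centripetal “gravity” of a = ω²r at radius r, with “up” pointing toward the spin axis – exactly the cant described above. A quick sketch (the 100 m hull radius is an illustrative assumption, not any particular ship):

```python
import math

def spin_rate_for_gravity(radius_m, accel_ms2=9.81):
    """Angular rate (rad/s) so that centripetal acceleration omega^2 * r hits the target."""
    return math.sqrt(accel_ms2 / radius_m)

# Illustrative 100 m hull radius, Earth-normal gravity at the rim
omega = spin_rate_for_gravity(100.0)
rpm = omega * 60.0 / (2.0 * math.pi)
print(f"{omega:.3f} rad/s, about {rpm:.1f} rpm")
```

About three revolutions per minute gets you a full g at the rim of a 100-meter hull – slow enough that written sci-fi has been happily spinning its stations and ships for decades.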

Sound in space is also a science error, but I’m happy to let it slide for the sake of artistic license. Same goes for big fireball explosions. Some shows go a long way, stylistically, by muting or eliminating sound from their spacecraft, though!

Most sci-fi gets the idea of rocket engines way off. Orbit maneuvers – including getting onto a transfer orbit to another planet – require a change in velocity known as delta-vee. Delta-vee comes from firing a rocket engine. The more the engine fires, the more delta-vee the spaceship gets. Simple enough, but the problem lies in propellant consumption: a spaceship only has a finite amount of propellant aboard, and when you use it all up in engine burns, you can no longer move your spaceship around. So spacecraft rocket firings necessarily happen only during brief intervals, when absolutely necessary. A real spaceship will never have rocket engines on continuously in an “idle” state, or to overcome friction like a boat or airplane has to! (Electric propulsion, like ion engines, behaves a bit differently – those engines are almost always very low-thrust devices that have to be on for months, say, to get a space probe from the Earth to the Moon.) Worse, something like the Starship Enterprise would have to devote most of its mass to housing propellant reserves to accomplish many of the maneuvers we see. To get around this issue, many sci-fi universes include some kind of “reactionless” drive or other engine based on as-yet-unknown physics that can use the Clarke argument. I’m not sure why those engines need to have glowing backward-facing exhaust vents, though!
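That budget bookkeeping is exactly what the Tsiolkovsky rocket equation captures: delta-vee grows only with the logarithm of the wet-to-dry mass ratio. A minimal sketch (the ship masses and Isp below are illustrative assumptions):

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_s, wet_mass_kg, dry_mass_kg):
    """Tsiolkovsky rocket equation: total delta-vee from Isp and mass ratio."""
    return isp_s * G0 * math.log(wet_mass_kg / dry_mass_kg)

# Illustrative ship: 100 t dry, 50 t of propellant, good chemical engine (Isp ~ 450 s)
budget = delta_v(450.0, 150_000.0, 100_000.0)
print(f"Total delta-vee budget: {budget:.0f} m/s")
```

That works out to under 2 km/s – not even enough for a one-way transfer from low Earth orbit to the Moon – which is why real missions meter out every burn. The logarithm is also why the Enterprise problem bites: doubling your delta-vee budget means squaring your mass ratio.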

Orbit-to-surface-to-orbit shuttles are pretty bad. Barring some future magical physics, a single-stage-to-orbit vehicle is the holy grail of launch. It takes an enormous amount of propellant to climb out of Earth’s gravity well – far, far more mass than the rocket’s payload. Most launch concepts we can envision involve some component of the vehicle that doesn’t make it into space – whether it’s an expendable booster stage or a carrier aircraft that stays behind for reuse. Re-entry can be just as problematic, as the vehicle has to get rid of a ton of kinetic energy (to make a long story short, that’s what space capsules’ heat shields do). Shuttles can be obnoxiously necessary for crews of planet-hopping explorers, though…
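To put rough numbers on why single-stage-to-orbit is so hard (both figures below are ballpark assumptions: about 9.4 km/s of delta-vee to low Earth orbit including gravity and drag losses, and kerosene/LOX-class Isp of about 350 s), invert the rocket equation to get the required propellant fraction:

```python
import math

G0 = 9.81            # standard gravity, m/s^2
DV_TO_LEO = 9400.0   # m/s, rough delta-vee to low orbit including losses
ISP = 350.0          # s, roughly kerosene/LOX-class performance

exhaust_velocity = ISP * G0
# Rearranged Tsiolkovsky equation: propellant fraction = 1 - m_dry / m_wet
prop_fraction = 1.0 - math.exp(-DV_TO_LEO / exhaust_velocity)
print(f"Propellant fraction needed: {prop_fraction:.1%}")
```

Over ninety percent of the vehicle sitting on the pad has to be propellant, leaving only a few percent for structure, engines, and payload combined – which is exactly why staging, expendable boosters, and carrier aircraft exist.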

Like shuttles, faster-than-light travel is tough. It’s the elephant in the room of most science fiction: writers are just dying to have it, but it cannot be accomplished by any means we currently know about. There are some theories out there that might give us FTL capabilities, but only under the most extreme and unrealizable conditions. (Things like…being inside a black hole and whatnot.) However, being able to move characters from planet to planet very quickly can make for richer storylines, more imaginative settings, and more exciting descriptions and visuals, and so it becomes a kind of necessary evil.

Addendum: Reader Nominations

A couple readers have commented on some other effects or technologies commonly depicted in science fiction that commit scientific faux pas.

  • Will pointed out “shields” and “force fields,” which form an impenetrable (or, at least, only as penetrable as the plot requires) bubble wall around a starship. The idea of a deflector shield has some basis in scientific fact – magnetic fields really can deflect charged particles, which is how Earth’s magnetosphere fends off much of the solar wind – but there is no real way to project a solid wall around your favorite spaceship that prevents all matter and energy from passing through.
  • Jon mentioned that many movies and TV shows include “energy weapons” which produce blasts that travel slower than not only light, but also sound!