Category Archives: Social commentary

Risk

So, there’s been another explosion on an oil rig in the Gulf of Mexico. This got me thinking about risk and risk management.

In the engineering sense, “risk” refers to the chance that a particular system will fail and how heavily we weight the consequences of such failures. Risk is present in any design, any system, any process. There’s no way anyone can drive risk to zero, because nobody has perfect knowledge of any system and nobody can predict the future with 100% accuracy. The question is how unlikely and how inconsequential a failure must be to represent an acceptable risk. A complementary question is how well we plan to deal with those failures when they happen.

BP undoubtedly performed some sort of risk analysis on the Deepwater Horizon platform before it began operations. Engineers must have, at some level, looked at the drilling hardware and procedures and decided that the chance of a catastrophic failure was such-and-such percent. They must also have looked at the cost of dealing with those failures, and come up with so many billion dollars. But all this gets weighed against the potential benefits: if the Deepwater Horizon platform brought in revenue of only a thousand bucks a year, but had a chance of failure of 50%, and the cost to the company of that failure was $20 billion, then BP probably would not have set up the platform the way they did. But if the calculation came out with a one-in-a-million chance of failure, a $20 billion cost of failure, and revenue of $50 billion per year, then of course they’d go ahead with the project.
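Just to make that back-of-the-envelope arithmetic explicit, here is a tiny sketch using the made-up numbers above – these are my illustrative figures, not anything from an actual risk analysis:

    # Rough expected-value comparison for the two hypothetical scenarios above.
    # All of these numbers are illustrative, not figures from any real analysis.

    def expected_net_benefit(p_failure, cost_of_failure, annual_revenue):
        """Annual revenue minus the probability-weighted cost of a failure."""
        return annual_revenue - p_failure * cost_of_failure

    # Scenario 1: tiny revenue, coin-flip odds of a $20 billion disaster.
    print(expected_net_benefit(0.5, 20e9, 1e3))    # about -$10 billion: don't drill

    # Scenario 2: one-in-a-million odds of the same disaster, $50 billion in revenue.
    print(expected_net_benefit(1e-6, 20e9, 50e9))  # essentially +$50 billion: drill away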

The failure of the actual Deepwater Horizon system could mean any one of several things. It could mean that all BP’s risk estimates were correct, and they just got supremely unlucky with that one-in-a-million chance: unlikely, but possible. It could also mean that their analysts made some error: they may have put the chance of failure too low, or the consequence of failure too low, or the potential benefit of success too high. The real trouble with this sort of thinking is that we can’t know for sure where the analysis went wrong, if it did.

However, given BP’s horrendous safety record, the facts that came out about how blasé other oil companies have been about drilling, safety, and cleanup in the Gulf, and this second explosion on a platform owned by another company with a dubious safety record (at least, so I heard on NPR), I tend to think there was a problem with the risk analysis. These companies are engaging in higher-risk behaviors in order to get higher payouts. In short: they are getting too greedy. This might not be a problem in some industries, but here, the cost of failure isn’t borne just by the risk-taking companies, but also by the residents of Gulf Coast states (along with the rest of us taxpayers). I hope that these incidents cause the companies in question to revise their risk analyses to be more conservative, especially now that our society more widely recognizes what such risky behavior costs the broader economy, the environment, and the climate.

Now, there are good reasons to pursue more high-risk activities, if the potential benefit is high. For instance, there’s my favorite kind of engineering: spacecraft engineering! I would love for NASA to take much greater risks than it currently does!

Current NASA policy, for instance, dictates that any mission should present zero risk to the safety of astronauts on board the Space Station. This policy, which appeared after the Shuttle Columbia broke up on reentry, makes little sense. Remember what I said before about zero risk? It does not and cannot exist. Yet, that’s NASA policy – and the policy has caused NASA to nix some pretty exciting missions for posing, for example, a one-in-10^8 chance of collision with ISS. The chances of Station astronauts getting fried by solar flare radiation, or baking when ISS refrigeration units fail, or losing their air to a micrometeoroid are likely to be much higher than 1 in 100 million – so what’s the problem? These missions don’t add any danger compared to the dangers that already exist.
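To see just how little such a mission adds to the risks the crew already lives with, here is a quick sketch; the baseline failure probability below is a number I made up purely for illustration:

    # How much does a 1-in-100-million collision risk add to the crew's existing risk?
    # The baseline probability is invented for illustration, not a real NASA figure.

    baseline_risk = 1e-2   # assumed chance of some other fatal failure during a stay
    mission_risk = 1e-8    # the kind of collision probability that gets missions nixed

    # Combine the two as independent events.
    total_risk = 1 - (1 - baseline_risk) * (1 - mission_risk)

    print(total_risk)                                    # ~0.01000001
    print((total_risk - baseline_risk) / baseline_risk)  # fractional increase: ~1e-6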

Besides, we’re talking about spaceflight. It’s not safe. I mean, we’ve made it pretty safe, but still – it involves strapping people on top of tons of high explosives, pushing them through the atmosphere at hypersonic speeds, jolting them around repeatedly as rocket stages separate and fire, and then keeping them alive in a vacuum for days, weeks, or months at a time. Honestly, it’s astonishing that we managed to pull off six Moon landings with only a single failed attempt – and a nonfatal one at that!

I would argue that those tremendous successes in the early space program came from high-risk activities. For the first American manned flight into orbit, NASA put John Glenn on top of a rocket that exploded on three out of its five previous launches. The Gemini Program pioneered the technologies and techniques necessary for a lunar landing (and that we now take for granted in Space Shuttle activities) by trying them out in space to see what happened – that program nearly cost Neil Armstrong and David Scott their lives on Gemini 8. The Apollo 8 mission, which was supposed to orbit the Earth, was upgraded to a lunar orbital flight – the first time humans visited another planetary body – mere months before launch. But these days, to hear NASA brass and Congressional committee members tell it, no such risks are acceptable. NASA must use “proven technologies.” NASA must accept no more than bruises on its astronauts when they return from missions. NASA must not chance any money, material, or manpower on a mission that might not succeed, even if such success could give us the next great leap forward. And so we end up with manned “exploration” of only low Earth orbit for thirty years, an Apollo reimagining to succeed the Space Shuttle, and, if the House has its way with President Obama’s proposed NASA budget, a space program dedicated to building The Same Big Dumb Rockets That It Already Built for the foreseeable future.

Fortunately, we still get to see some envelope-pushing on the robotic exploration side of things. Missions to Mars have only recently broken through to a cumulative success rate greater than 50%, thanks to a string of high-profile successes, and that low rate is partly because of the ambition involved in landing something on another planet. It’s wonderful to see the progression from the Sojourner to Spirit and Opportunity to Curiosity rovers – but remember that the Beagle 2 lander, Mars Polar Lander, and Mars Climate Orbiter were all lost at the Red Planet. These failures cost money and effort, and perhaps a line of research in a few academic careers, but not lives, which makes them easier to bear back on Earth. Even if the risk is high, the cost of failures is acceptable compared to the benefits.

Still, there could be more room for audacity (is audacity = 1/risk?) in robotic space exploration. Take the MER mission, for example: a pair of vehicles designed to last for 90 days have been operating for over six years – and counting. In one sense, this is a great success. But in another, it shows that spacecraft engineers are far, far too conservative in their designs. Imagine if they had actually designed the MER rovers to run for 90 days: everyone would have been happy with the mission, and the rovers might have cost less and taken less development time – by something like the ratio between the ~2200 sols they have actually operated and the 90 sols they were designed for. Or, conversely, consider if NASA had been ambitious enough to design a five-year rover mission from the start. That might have seemed laughable when the MERs were launched, but now we know that duration to be well within our capabilities. Because, in fact, we design space missions that rarely stretch those capabilities, since we do not tolerate risk.
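For a sense of scale, here is that ratio worked out; the sol counts are the approximate figures above, and treating cost or schedule as scaling with design life is purely my own simplification:

    # How far past their design life the MER rovers have run (as of this writing).
    design_life_sols = 90.0
    actual_life_sols = 2200.0   # approximate, and still counting

    margin = actual_life_sols / design_life_sols
    print(margin)               # roughly 24x the original requirement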

This risk aversion in spacecraft engineering is one reason why I (along with so many other people) am excited to see companies like SpaceX and Scaled Composites – which aim to turn a profit, something NASA doesn’t have to do! – doing the things they are doing. SpaceX especially: it had to launch its Falcon 1 rocket several times before one succeeded, but it used that experience to pull off a big Falcon 9 launch and secure the largest commercial launch contract ever. It’s also one of the reasons I was so excited about President Obama’s plan for NASA: it looked like NASA would be sticking its neck out for unproven technologies again.

How is it that we as a society tolerate tremendous risk when it comes to activities that affect thousands or even millions of lives on Earth, but we balk at the slightest chance of failure when considering space travel? It’s a puzzle to me.

Impressions of Scientists: Before and After

Ten years ago, a seventh-grade class did an intriguing project. The students drew pictures and wrote descriptions of what they thought scientists were like. Then the entire class visited Fermilab, a US accelerator physics lab. After the visit, the students created a new set of drawings and descriptions of scientists.

The results are here, on a Fermilab outreach page.

Almost all of the “before” pictures drawn by these students show a man in a white lab coat holding a test tube. Many of the scientists depicted are balding, wearing glasses, and have a shirt pocket stuffed full of pens. The accompanying written descriptions talk about people who are “kind of crazy, talking always quickly,” “a very simple person . . . simple clothes, simple house, simple personality,” someone who “never got into sports as a child; he was always trying to get his straight A grades even higher,” is “brainy and very weird,” and “has pockets full of pens and pencils.” The descriptions from female students are particularly fixated on the stereotyped image of a geeky guy in a lab coat. Many of the students described someone who does try to do good things and make the world a better place, but who is still ultra-smart in some obscure way that does not relate to the students’ own lives.

The “after” drawings and descriptions were quite different. Gone were the lab coats, test tubes, and glasses. Some of the background items like desks or computers remained, but the students drew men in jeans and tee shirts and women in ordinary blouses. Suddenly, “scientists” are people who “are interested in dancing, pottery, jogging and even racquetball” and “are just like a normal person who has kids and life.” The scientist “doesn’t wear a lab coat” and “got normal grades in school.” Scientists “come in all shapes and forms,” “aren’t very different from everyone else,” “played sports, still play some sports or still watch and go to games,” “are really nice and funny people.” One of these seventh graders “even saw a person with a Bulls shirt on.”

In the new descriptions, I saw that many of the students realized that scientists were not driven to science by their intelligence, by social rejection, or by an innate need to best everyone around them in intellectual gamesmanship, but by a passion to discover, to create, to invent, to explain, and to improve our everyday lives. Scientists chose their careers because they love science and are dedicated to answering the questions they pose. And that love remains with them. They are pursuing a dream, doing what they want to do and have wanted to do for much of their lives. In the words of one student, “if you want to be a scientist, be like these wonderful people and live up to your dreams.”

Many of these students also came away with a new sense that with this passion and dedication, they could be scientists, too. While few of them put the idea in those words, a number of descriptions echoed the phrase “they are just like you and me.” Some thought that “a scientist’s job looks like a lot of fun” because “they can do whatever they want and they still get paid for it.” One girl in the class even went so far as to say “Who knows? Maybe I can be a scientist!” I was particularly glad to see the work of girls like Amy, who started with a fairly stereotyped image of the balding, nearsighted man in a white coat, but ended up with a woman in ordinary street clothes who has a full set of hobbies along with her love of science. Even if Amy didn’t write “I could be a scientist, too,” her after-visit picture probably looks a lot more like the way she thought of herself in seventh grade.

The “Who’s a Scientist?” page was last updated in May 2000. Now that those students are old enough to have graduated from college, I’d love to see someone get back in touch with them to see how many pursued science in college and how many of them have gone on to advanced studies or to scientific careers!

I love the idea of this project, and I wish more schools in this country would do similar things. It would be incredibly valuable for our students to see that it’s not just brains that make a scientist, and the required brains don’t crowd out all the other qualities that make people interesting or friendly or outdoorsy or social or anything else these students might want to be. We physicists and chemists and astronomers and biologists and geologists are not merely adult versions of the stereotypical middle-school nerds!

(Those are the computer scientists.)

science and morality

I’ve been getting a lot of my subject matter from Ryan lately, it seems…

Well, in any case, he put a link on Twitter to Sam Harris’ TED talk about science and morality, and how science could feed into morality. It’s well worth looking at and thinking about a little.

Morality has to do with distinguishing “right” from “wrong,” and Harris has a very good point that scientific methodology could be applied to help make that distinction. However, while I listened to his talk, a very important point came to mind. Let me set this up with the statement that many concepts or measures in this universe don’t come out to binary extremes. (Quantum states of spin-1/2 particles, for instance, are an exception.) In most cases, it’s not a question of just being on one side or the other; it’s a question of how far towards one side or the other your measurement comes out. I think the same is true of morality: how right is one thing compared to another? How wrong are the alternatives?

In answering such questions with scientific processes – not an idea I disagree with, in principle – we would likely end up at some kind of optimization problem. Given all the scientific data about the possible reactions and effects of a particular decision, how can we make the most “right” decision? That’s a pretty straightforward problem to approach scientifically. However, we must be careful about how we define “most”!

As an example, if you drive, you have probably had the experience of getting stuck at a stoplight somewhere, getting frustrated, and saying to your passenger or yourself, “Wow, these lights are stupid. I’d love to meet the guy who designed them – they could be a lot better than they are.”

The operative word there is “better,” and the question is, how do you tell which stoplight timings are better than others? Probably, the guy who designed them actually chose the best timings. But what he considered “the best” is maybe not what you consider “the best.” Maybe he maximized the traffic flow on the main street instead of the cross street. Maybe he minimized the average number of red lights cars encounter along a certain route. Maybe he found the timing that gave the least amount of wait time at certain intersections, while also giving the highest possible rate of cars through the intersection, during rush hour on average Thursday mornings. Which one of these definitions of “best” is best? And why is it so? There is an assumption underlying the process here, and it can have a dramatic effect on the results.
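To make that concrete, here is a toy sketch. The traffic model, the volumes, and the delay formula are all things I invented purely to show how the choice of objective changes which timing comes out “best”:

    # A toy stoplight: one intersection, a 60-second cycle, and we choose how many
    # seconds of green the main street gets. Every number here is invented; the
    # point is only that different definitions of "best" pick different timings.

    CYCLE = 60.0
    MAIN_VOLUME, CROSS_VOLUME = 0.5, 0.2   # cars per second arriving on each street

    def avg_wait(green_fraction):
        # Crude delay proxy: longer red phases mean quadratically longer average waits.
        return CYCLE * (1.0 - green_fraction) ** 2 / 2.0

    def main_street_wait(green_main):
        return avg_wait(green_main / CYCLE)

    def total_weighted_wait(green_main):
        f = green_main / CYCLE
        return MAIN_VOLUME * avg_wait(f) + CROSS_VOLUME * avg_wait(1.0 - f)

    candidates = range(10, 51)   # give each street at least 10 seconds of green

    best_for_main_street = min(candidates, key=main_street_wait)
    best_for_everyone = min(candidates, key=total_weighted_wait)

    print(best_for_main_street)  # 50 s: give the main street all the green you can
    print(best_for_everyone)     # 43 s: split the cycle according to traffic volume

Both timings are “optimal”; they just optimize for different drivers.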

I think we have to keep that point in mind while considering Harris’ points. We have a lot of data on actions and consequences. We can use scientific processes such as optimization to try and synthesize that data into a decision about what is right and what is wrong. But we have to bear in mind the assumptions that underlie that process, be up front about them, and be willing to entertain other possibilities.

terrifying influences on school boards

I am reluctant to bump “Conference” down on my front page with this can of worms, especially now that my readership has been on the rise, but hey, it’s my blog….

Yesterday I made the mistake of trawling around the New York Times web site for a few minutes between a lunch meeting and getting back to work. It was a mistake because I discovered this magazine article on the influence of religion in textbook revisions. It caught my attention with its headline, but it’s not really about how Christian the American Founding Fathers were. It’s about how Christian the Texas state school board thinks they were.

It’s a long article, and it covers a lot of ground. And I find a lot of it, honestly, terrifying.

I’m not just talking about the despicable attempts to get Christian creationism into science classrooms. (Side notes on semantics: “intelligent design” is a form of creationism, so I will not distinguish between the two; also, I will generally use the word “creationism” as a shorthand for “Christian creationism” – a necessary distinction, as there are hundreds of religions, each with their own creation story, to choose from.) Nor am I talking about the insidious efforts to insert the beliefs and practices of specific Christian sects into our government. I am talking about the repeated references to concepts like manifest destiny – the idea that American history has been guided by divine providence, that westward expansion was an effort to bring the One True Religion to the inferior heathen natives, that God has chosen America for divine purpose. It’s the divine right of kings all over again. And it’s the very reason why we have the First Amendment. A lot of that article made me so angry that I couldn’t do any useful work for about half an hour.

got off easy

A judge has sentenced Dale and Leilani Neumann, the Christian fundamentalists who were convicted of negligence in the death of their diabetic daughter after they prayed for her healing rather than contacting any medical professional. They get six months in jail, to be served one month out of every year for the next six years. I think they got off easy. They are guilty in my mind of criminal insanity and hubris, and at the very least, their two remaining children should now be wards of the state.

One purported definition of “insanity” is to repeat the same action or set of actions, over and over again, seeing the same result each time but somehow expecting a different one. When their daughter felt faint, the Neumanns prayed. When she could no longer walk, the Neumanns prayed. When she could not eat, the Neumanns prayed. When she could no longer even speak, the Neumanns prayed. And when her breathing came in ragged, shallow gasps, the Neumanns prayed. Only after her breath and pulse stilled did they think to contact EMS; by then, of course, it was far too late. These parents have demonstrated that their convictions are more important to them than the safety and health of their children. They have also demonstrated an inability to form a workable understanding of the world from observable phenomena. Insanity that endangers lives: these people should be put away for psychiatric evaluation.

I’m reminded of that pseudo-joke – or, more appropriately, the modern parable – of a man with devout beliefs who hears on the evening news one day that his city is in the path of a terrible hurricane. “I’m not worried about that,” he says to himself, “because I know that God will save me.” The hurricane hits, and as trees and power lines crash to the ground around his house, a policeman comes to his door. Yelling over the wind and rain, the officer offers the man a ride out of town. “No, thank you,” says the man, “I trust in God to save me.” Hours later, the city floods and the man flees to the roof of his house as the water level rapidly rises. He sees a family paddling down the whitewater of their street, and they backpaddle for a moment to draw closer to the man. “Come quickly!” they cry, “we have room for one more! We can save you!” But the man refuses again, telling them that he knows God will save him from this predicament. The water continues to rise, and the man eventually drowns in the ruins of his home. His soul finally comes in contact with the God he always believed in, but, his faith shaken by the hurricane, the man cannot help but shout, “God, I believed in you all my life! How could you leave me on that house to die?” God retorts, “What are you talking about? I sent a TV newsman, a police officer, and your neighbors, all to help you!”

This brings me to my second point: for the Neumanns to refuse to contact medical professionals is arrogance, pure and simple. They were, in essence, refusing to admit that their fellow human beings could help their daughter. Not only were they refusing their fellow men and women, but they were refusing their daughter – treating their own beliefs, even in the face of dwindling supporting evidence, as more important than her life. If there is a God who created people in the image of God, then people and their capabilities are at least representatives of divine power. Even if you take issue with that statement, you must admit that people do have the capability to treat type 1 diabetes, which caused the Neumanns’ daughter’s death. So, unlike in the parable I reproduced above, there was no uncertainty about the outcome in her case – without insulin, she would die; in the hands of medical professionals, her diabetes would have been easily identified and treated. She would still be alive. Her parents refused a course of action that would have kept their daughter alive in favor of a course of action that they could plainly see was allowing her condition to deteriorate. This level of pride, to “stay the course” when a quick, easy, and known solution exists but would require some ideological capitulation, is staggering.

I have type 1 diabetes myself. I know that my treatment regimen revolves around human ingenuity and technical proficiency. God did not create the insulin pump that keeps me alive. God did not hand down to humans the techniques for cajoling pig pancreatic cells to produce human insulin. And God certainly hasn’t waved a mighty hand to miraculously cure me. No, for those first two items and hopefully for the third, human intelligence is responsible. Human training. Human learning. Human teaching. Human experimentation. Human courage. If a God is in any way responsible, it is solely in allowing human brains to evolve such that we could produce the advances in science, medicine, and technology that would lead to insulin production, glucose monitoring techniques, subcutaneous insulin infusion pumps, and the education of those who must treat themselves. For me to rely on wishful thinking to hope my diabetes away would be negligence. If someone else was responsible for treating me, for them to rely on wishful thinking to hope my diabetes away would be criminal negligence.

on healthcare and research

I have two things I want to write down some thoughts about.

First, while I do some of the more mechanical computer modeling work during the day, I’ve been listening to a lot of NPR streamed over the Tubes. Today, I learned some factoids that basically break down as follows:

  1. If you figure out how many people in America get health care and the quality of care they receive, you find that we actually have the most “rationed” healthcare system among industrialized nations. That is, in countries with omg-we-can’t-have-that single-payer healthcare – or even with systems not as “vile and disgusting” as that – more people get the care they need, when they need it, than in the USA.
  2. If you figure out how much health care costs in this country, and compare it to the cost of health care in other countries – not just premiums, mind you, but tax money that goes into health care as well – you find that Americans have the most expensive health care system in the world.

If you’re thinking what I’m thinking, it’s that the GOP is neither morally nor fiscally responsible, and that they are not really “conservative” by any actual definition of the word. If you’re not thinking that, you’re probably a Republican and have just pegged me as a pinko commie godless bleeding-heart Massachusetts liberal. (I will give you three of the words in that phrase, contend that there’s nothing wrong with at least those three, and the rest I contest.) In fact, I am merely a scientist and engineer: I know how to read numbers and am willing to make policy decisions based on data. I’m also an insulin-dependent diabetic, and would seriously appreciate a much lower cost for, and more assurance of the efficacy of, the treatments that keep me alive.

Second, I have been hoping to come up with some good theoretical results to present in a conference paper on my research later this summer, and it just hasn’t happened. I’ve been too busy with other work-related things, and now I’m in a summer internship at NASA and don’t have the time to spare, so results are not going to be forthcoming before the paper deadline. This leads me to conclude that I much prefer being an experimentalist to being a theoretician. The reason is that experiments sometimes go the experimenter’s way, and sometimes they don’t – but part of that is uncontrollable. The experimenter can, though, usually sift through data to find some useful results. Even negative results are useful. Any results at all will at least shed light on the techniques employed. If theoretical work doesn’t go the theoretician’s way, however… you are just left with a theoretician staring blankly at a piece of paper covered in scratchwork. And a lack of results just means that the theoretician hasn’t done the right thing or worked hard enough yet.

In other words, I have no results and it’s my own damn fault. I can’t even blame faulty apparatus, numerical noise, or experimental error. I just didn’t do enough, or the right kind of, work. And that just makes me less motivated to continue this line of inquiry.