I put together my gaming computer and got everything up and running the way I like it over the last week. It’s been a satisfying experience – planning it all out, assembling it, and working out the little kinks. (I got a few taken care of; I’ll see how many more I hit.) The result is a computer that has been able to handle everything I’ve thrown at it so far in good form!
This contraption is running off an Intel Core i5-2500K processor, which seems plenty fast enough to eat up some games while also not costing me $1000, like the higher-end Core i7s do. I have it in an Asus P8Z68-V PRO/GEN3 motherboard, which offers not only compatibility with all the other hardware but also sufficient expansion capability for me to upgrade a few times over the coming years. There are two 4 GB sticks of 1600 MHz G.Skill RAM in there, with room for two more DIMMs of up to 8 GB each. I sprang for a Sapphire Radeon 7950, a GTX 580 competitor that’s only about a month into its product life, giving me plenty of usable time on this graphics card (and the option to pair it with a second one when we hit the end of the Radeon product cycle). That card also has some nifty power-saving features to keep it from hogging all the electricity in my apartment. I put Windows on a 64 GB solid-state drive, and I have two 7200 rpm, terabyte-sized hard disks in a mirrored RAID configuration for data. It’s all running from a Cooler Master Silent 700 W power supply in an Antec Three Hundred Illusion case. (I am fantastically happy with this case/power/cooling situation, by the way.)
Now I can run around feeling like I’m inside a Lord of the Rings movie.
This decision was spurred on by three factors: (1) I want to know what happens in StarCraft 2, and my current computer can’t run it, (2) I want to play Skyrim, and my current computer would probably die a horrible death before messily regurgitating that disc, and (3) I don’t want to pay a ton of money.
It’s been interesting to discover that, now that I can decide exactly what goes into my computer and what doesn’t, I can get a good performance machine without spending more than about $1000 on the PC hardware. This article has been extremely helpful as an example.
For example, it seems pretty clear that Intel processors dominate the performance market. Lots of commercial gaming or performance PCs are all racing along on Intel Core i7 CPUs, which run up to $1000 by themselves – but everything I’ve read suggests that a $210 Core i5-2500 is superb for gaming and that anything more expensive is way beyond the point of diminishing returns in terms of cost for performance. The resulting price difference between the Core i5 and the i7 from a commercial gaming system can then go towards a higher-power graphics card, which has much more of an impact on game performance.
Of course, to balance out the relatively easy decision on the CPU, graphics cards are much more of a muddle. I’m going for a gaming card, but I decided not to look at the absolute top tier, simply because those cards run $600-plus. Mostly, I’m looking at the GeForce GTX 570 and the Radeon 6970. Neither Nvidia nor ATI is a clear brand leader, though the GTX 570 edges out the Radeon in performance just a bit. When I started this project a few weeks ago, I was disappointed to see that both those cards belong to series that are just over a year old at this point – meaning that new cards are likely coming out soon. In other words: now is not the time when a graphics card consumer is in the best buying position. ATI proved my point just recently by announcing the Radeon 7970, their new high-end card. It’s above my target price point, sadly – but the still-rumored 7950 would have been just about perfect for me if it had been announced at the same time. Darn!
However, something that makes the graphics-card situation particularly interesting to me is that, since the last time I was looking at computer components, the video card manufacturers have developed technologies that allow similar graphics cards to work in parallel. I was interested to find that, on benchmarks, adding a second card can yield almost an additional 80% of graphics power. I didn’t expect a full additional 100%, but neither did I expect much more than, say, 30-50%. So I have an interesting possibility: I could get one graphics card now, and if this year’s releases blow it out of the water, I can buy a second one at a discounted price and boost my system performance substantially.
Other things are less important to me: The case isn’t a big deal as long as it holds all my stuff. I know about how much RAM I want, but I don’t want to fill all my DIMM slots, so I can upgrade later if I desire. The game with motherboards seems to be making sure the board supports all the other components, and the power supply should have well more than enough capacity to handle everything else. I’ve seen articles that benchmark different motherboards or RAM packages, but they have such a tiny effect compared to the processor and graphics card that I’m not worried about them. (I’m also not thinking of overclocking, which is where more of the RAM and motherboard issues seem to matter.) The one thing I keep finding puzzling is that RAM splits pretty neatly into “budget” vs. “high-end” memory – but I struggle to find what sort of impact that has, other than the dramatically styled heat sinks that make high-end DIMMs look like Klingon weaponry. That seems like a cosmetic thing to me, but many users and reviews seem to prefer the high-end stuff without explaining much about why.
I’m looking forward to piecing everything together. For one thing, I like the idea of assembling all the components. But for another, it seems like the world of computer games is a more lively forum for science fiction plots than movies and TV, and I want to get in on that.
Now if only Star Wars: The Old Republic wasn’t an MMO…I know it will all devolve into repetitive dungeon raids, but it just looks so awesome…
Hello, Intertubes! I have been slacking off on the blog in favor of preparing my dissertation and the presentation for my defense. I know, excuses, excuses…
To keep all eighteen of my intrepid readers happy, here is a video that recently went up on my lab group’s YouTube channel:
That’s me demonstrating the physical principles that could be used to make a real-life tractor beam that can push, pull, and manipulate spacecraft. The device would work by pumping changing magnetic fields at a target spacecraft, exciting eddy currents in the spacecraft’s aluminum skin. These currents interact with the magnetic field from the tractor beam device, allowing it to push, pull, or rotate the target.
In the video, I generate these changing magnetic fields by moving a big rare-earth magnet around. On a spacecraft, a more likely tractor beam device would be a set of electromagnet coils. I calculated that, with reasonable power requirements, such a device could exert ion-engine-scale forces on a target several meters away. More powerful electromagnets would increase that range.
So, MAKE Magazine has this on their current cover:
That’s a Lego Mindstorms NXT computer and other Lego pieces on a spacecraft. “Cool!” my labmate and I thought upon seeing this. “Satellites made out of Legos!”
Well, it turns out that the article says this is a picture of a functional satellite prototype made out of Legos by a group at NASA’s Ames Research Center. (The same group that recently launched a spacecraft that used a cell phone for its computer system!) But, you know…why not? Why not make a satellite out of Legos? I think this would be a great idea!
What would it take?
The physical structure of a Lego-brick satellite would have to withstand the rigors of a launch into space. This involves accelerating the satellite and subjecting it to heating from friction as the rocket climbs, among other things. Space Mission Analysis and Design, Third Edition, gives the following “typical values” for acceleration and thermal requirements of satellites in a launch vehicle:
Acceleration: 5-7 g, but up to 4,000 g shocks during stage separation and other events.
Temperature: 10-35°C (but the inner wall of a Delta II fairing could get up to ~50°C).
The acceleration requirements, though that shock value sounds drastic, may not be too much of a problem. G-hardening can probably be accomplished simply by potting components in epoxy. Modern cell phones, for instance, are rated to several thousand g's so that they work even after you drop them. A good epoxy applied to all the joints in the Lego spacecraft structure – and probably around the whole structure after it’s completed, for good measure – could go a long way toward preventing shock damage during launch!
I’m more worried about the thermal requirements. Lego bricks are made out of acrylonitrile butadiene styrene (ABS), which seems to start deforming from heat at about 65°C. That 50°C Delta II fairing seems a bit close for comfort! Plus, the temperature of some Lego blocks sitting in direct sunlight in space could climb above that value very rapidly – and lots of transitions between daylight and shadow would cause the parts to expand and contract thermally, working the pieces apart if they aren’t well-secured with epoxy. However, the Lego satellite could be wrapped in something like aerogel or MLI blankets to mitigate the thermal challenges. Somewhat.
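To see why direct sunlight is a real worry, here’s a back-of-envelope radiative-equilibrium estimate (my own numbers, not from the MAKE article): a flat plate facing the Sun near Earth, absorbing sunlight and re-radiating from one face, with equal absorptivity and emissivity, settles well above any plausible ABS deformation temperature.

```python
# Back-of-envelope: radiative equilibrium temperature of a sunlit plate in orbit.
# Energy balance: absorbed solar flux = emitted thermal flux,
# i.e. a*S = e*sigma*T^4, so T = (a*S / (e*sigma))^(1/4).

SOLAR_FLUX = 1361.0   # W/m^2, solar constant near Earth
SIGMA = 5.670e-8      # W/m^2/K^4, Stefan-Boltzmann constant

def equilibrium_temp_c(absorptivity=1.0, emissivity=1.0):
    """Flat plate facing the Sun, absorbing and radiating from one face."""
    t_kelvin = (absorptivity * SOLAR_FLUX / (emissivity * SIGMA)) ** 0.25
    return t_kelvin - 273.15

print(f"Sunlit plate settles near {equilibrium_temp_c():.0f} C")
```

That works out to roughly 120°C for a black plate; real surface properties and two-sided radiation would lower it, but it shows how quickly "sitting in sunlight" becomes a materials problem.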
Another challenge is survivability of the computer system in the space radiation environment. With no atmosphere to absorb radiation, a cosmic ray could hit the spacecraft and trigger a single-event upset, or “bit flip,” that switches the value of a bit from 1 to 0 or vice-versa. This kind of thing happens to spacecraft computers all the time and corrupts data, so spacecraft computers engage in a lot of error-checking. But the same cosmic rays can also burn out a bit, so that the computer can never read its value again – or even burn out a trace in an integrated circuit so that the circuit fails! That sort of thing would definitely be a problem for a Lego spacecraft, and would shorten the life of the computer substantially unless we did some radiation hardening of the NXT. A simple way to harden it would be to encase it in some metal, but that adds mass, which is always at a premium on spacecraft. However, another strategy is to simply accept that the spacecraft will have a short life in orbit!
…Because, after all, what would be the purpose of launching a satellite made of Legos? It would be to show that commercially available materials are sufficient for at least some space applications, without the millions of dollars of investment in robustness and fault tolerance that the spacecraft industry generally demands. If the satellite’s mission can be accomplished in a few days and the lifetime of the craft is a week, then why should all of its components be certified for years of operation in orbit? Perhaps we could, instead, come up with much cheaper – or much riskier – satellite designs. We could try out new materials, new components, and new mechanisms without designing them never to fail. Instead, we accept a few failures as learning experiences, and move ahead with the designs that work.
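The error-checking mentioned above can be sketched in miniature. A single parity bit is the simplest defense against a single-event upset: it detects (though cannot correct) any single flipped bit in a stored word. This is just an illustration of the principle; real spacecraft computers use far more capable codes.

```python
# Minimal sketch of bit-flip detection with an even-parity bit.

def add_parity(bits):
    """Append a parity bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(word):
    """Return True if the word still has even parity (no single flip seen)."""
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])
assert check_parity(word)      # stored intact: parity checks out
word[2] ^= 1                   # a cosmic ray flips one bit
assert not check_parity(word)  # the upset is detected
```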
Legos are, at least, a fun place to start. Perhaps most importantly, they are easy to get into the classroom, so that students can think about building the structure, thermal, power, electrical, and payload systems into a functional satellite – and can re-arrange or re-format those systems at will. But hey – when they’re done, why not launch?!
After buying my third computer (I have a work desktop in my office, a personal laptop, and a personal tablet), I became a big fan of Dropbox. The service is a paradigm of cloud computing: I get a folder on all my computers that acts like a normal Windows folder but syncs up with a remote server every time a file changes. I immediately started using the service for, say, my dissertation-related files – which are now accessible from all three computers. As a plus, Dropbox downloads and keeps a local copy of every file in the folder, so my dissertation exists in four identical copies (all my computers plus the Dropbox server, which gets backed up on its own!), and I never have to worry about that work disappearing into some black hole if my hard drive crashes. And since I got a Droid Incredible, I can even access files in my Dropbox from there. Yippee!
I just came up with a devious new use of the software to add to all that. I do a lot of Matlab simulations these days, and they run fastest on my work desktop. However, these simulations take a long time, so I’d like to be able to set them up and get their results in short, intermittent checks while I’m traveling for the holidays. (Hey, I’m trying to move my research along efficiently and finish up my degree! Really!) But I haven’t been able to get Windows Remote Desktop to work – it seems that my department at Cornell keeps those ports closed and I haven’t been able to find a way around it.
So here’s what I did: I wrote a Matlab script that checks for the presence of other Matlab scripts in an input folder in my Dropbox. It then runs any scripts it finds, captures their output, and deposits that into another folder in my Dropbox. (I encapsulated the run command inside a try/catch block which also plops any errors into the output folder.) The script then deletes the file from the input folder and loops. If I put a file named “stop” in the input folder, the script cuts itself off. I think next I will add some code looking for a file named “clean” and responding to that by clearing all variables except those used in the wrapper loop.
From any of my computers, I can now write a Matlab script to do some simulations and copy it into the “input” folder. When my work desktop syncs up with Dropbox, the Matlab loop catches the script and runs it. I can check the Dropbox output folder later, again on any of my computers, to see what happened!
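My version is in Matlab, but the same watch-folder loop is easy to sketch in Python; the folder paths, the `.py` job extension, and the polling interval below are illustrative, not the exact setup I use.

```python
# Python sketch of the Dropbox job-runner idea: poll an "input" folder
# inside a synced directory, run each script found there, and drop its
# output (or its error text) into an "output" folder.
import subprocess
import sys
import time
from pathlib import Path

def run_pending_once(input_dir, output_dir):
    """Run every *.py job waiting in input_dir; log results to output_dir."""
    for script in sorted(Path(input_dir).glob("*.py")):
        result = subprocess.run(
            [sys.executable, str(script)], capture_output=True, text=True
        )
        # Keep stdout on success, stderr on failure -- the try/catch analogue.
        body = result.stdout if result.returncode == 0 else result.stderr
        (Path(output_dir) / (script.stem + ".log")).write_text(body)
        script.unlink()  # the job is done; remove it from the input folder

def watch(input_dir, output_dir, poll_seconds=30):
    """Loop until a file named 'stop' appears in the input folder."""
    while not (Path(input_dir) / "stop").exists():
        run_pending_once(input_dir, output_dir)
        time.sleep(poll_seconds)
```

The polling delay just needs to be long enough that the sync client has finished writing a job file before the runner picks it up.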
Maybe this little trick will be useful to someone else out there, so I decided to share it. Happy Hanukkah, grad students of the world!
One has to do with the carriers. Modern cellular networks are entirely digital. Make a call, and the phone digitizes your voice and sends bits through a radio network. Send a text, and the phone sends bits through a radio network. Load a web page, and the phone receives bits through a radio network. It makes absolutely no sense for phone companies to split their plans into “voice,” “text,” “email,” and “data” segments. Really, it’s all data. The network hardware doesn’t care whether the last byte you sent was voice or text or web; it was just a byte. Bit-bit-bit-bit-bit-bit-bit-bit. It took the same amount of bandwidth to send. The only reason phone companies structure things this way is that it gets people to pay for more things than they otherwise would – something that might change if, oh, let’s say, Congresspeople realized that it’s all just data and that the phone companies are charging customers several times for the same thing.
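To put some rough numbers on “it’s all just bits”: assuming a ~13 kbps full-rate voice codec (typical of 2G-era networks) and the 140-byte payload of a single SMS, the arithmetic below compares the two. The codec rate and the $0.20-per-text price are my own illustrative assumptions, not any carrier’s actual figures.

```python
# Rough arithmetic comparing voice, text, and data as raw bytes.

CODEC_BPS = 13_000   # assumed full-rate voice codec bit rate
SMS_BYTES = 140      # maximum payload of a single SMS

call_bytes = CODEC_BPS / 8 * 60   # bytes in one minute of voice
print(f"One minute of voice: about {call_bytes / 1024:.0f} KiB")
print(f"One text message:    {SMS_BYTES} bytes")

# At an illustrative $0.20 per text, the effective per-megabyte price of SMS:
sms_per_mb = 0.20 / SMS_BYTES * 1024**2
print(f"SMS works out to roughly ${sms_per_mb:,.0f} per MB")
```

By this estimate a minute of voice is roughly 700 texts’ worth of bytes, and per-message SMS pricing runs to over a thousand dollars per megabyte – the same bits, priced wildly differently depending on the label.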
The other is that I think the manufacturers, carriers, marketers, and (most annoyingly) customers have forgotten where the second half of the compound word “smartphone” came from.
I have had my eye on the Droid Incredible for a little while now, so I’ve been following many smartphone reviews to see how newer phones match up, and they almost universally agree that call quality on all these devices is okay at best. Today I played around with an Incredible for a bit in a Verizon store: I tried calling someone else with it, chatted for a bit, then switched phones with them and chatted some more. On both ends, the voice I heard was clearly intelligible but lacked the full richness of tone I would hear in normal conversation. It sounded a bit filtered, maybe with a little background noise. I figured it just sounded like a voice over a phone.
But then I called the other person on my old LG VX5400, a basic flip-phone that was inexpensive enough to be fully subsidized by Verizon, and repeated the chat-swap-chat sequence. There was a marked improvement in voice quality; it sounded like I heard a fuller frequency range through the connection in both cases. The other person agreed with my assessments.
This puzzles me: why would the Incredible both record and play lower-quality audio? I can think of a few reasons that might apply:
The Droid Incredible has an inferior speaker and microphone compared to my old phone.
The Droid Incredible has an inferior antenna compared to my old phone.
The Droid Incredible uses more lossy encoding schemes to digitize and play voice audio.
I think there’s no excuse for any of these scenarios. For the first two, clearly better hardware was available to the manufacturer and clearly that hardware is within Verizon’s subsidy budget, so there’s no particular reason to cut corners and make a less capable product. In the third case, well, that’s just silly; why would the manufacturer put software in place that detracts from the performance and appeal of their product?
Obviously, smartphones are being marketed to consumers on the basis of their web access and mobile computing features rather than their capabilities as phones. But I’m looking at upgrading my primary (and only!) phone line, so it’s important to me to be able to clearly understand others and clearly express myself in phone calls. The hit on voice quality isn’t quite enough to outweigh my reasons for wanting the Droid Incredible’s other features, and I’ve seen anecdotal evidence on the Internet that goes both ways on its call quality – but a noticeable reduction in voice quality was disappointing enough to make me briefly reconsider. This device is supposed to be better than my flip phone; yet while it may be “smart,” it’s not better at being a phone.
Maybe this is why many of the people I know who have obtained smartphones immediately became harder to get in touch with…
Tonight, a friend of a friend came over to my apartment so we could all make chili together. During this process, we came to a point when we needed to defrost a bunch of ground beef. When I moved to the microwave to get that going, Friend-of-a-Friend says to me, “You know, you can also defrost meat in a bowl of warm water. That’s healthier for you.”
Usually, my choice of defrosting method is governed by how long I feel like waiting for dinner and how far ahead I’m thinking. But I was curious about this new rationale, so I asked Friend-of-a-Friend to explain how the warm-water method is healthier than punching the “defrost” button on my microwave. “Well,” this person says, “one is cooking with radiation, and one isn’t.” Then they shrug and make a waffling gesture with their hands. “Ehhhh…” The implication was clear.
Something about this situation bugs me. Here is a person who has enough scientific knowledge to see that there is a connection between microwaves, radiation, and certain health concerns – but not enough knowledge about these things to realize that they have constructed a problem or fear that has no justification.
Microwave ovens work by bouncing radiation with a wavelength of several centimeters around in a cavity. The oscillating electromagnetic field makes polar water molecules rotate back and forth, and that molecular agitation spreads through the food as heat.
Ionizing radiation can cause health risks in a number of ways, including killing things outright at high enough doses. However, the more relevant concern at the low levels of radiation found in a household appliance would be that the radiation could damage the structure of some cells’ DNA, and those cells would run amok – becoming cancer.
However, microwave radiation is non-ionizing: it is not energetic enough to do much more than make molecules rotate and vibrate. It can’t cause any more direct damage to you than a walkie-talkie does by blasting you with radio waves, or a household radiator does by bathing you in infrared radiation. Furthermore, it can’t damage the DNA or cell membranes in the steak or pork chop or broccoli floret or baked potato or whatever else you put in your microwave oven. Even with ionizing radiation, irradiating a steak doesn’t make it radioactive. The result you get is a hot steak, not a carcinogen.
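The energy gap here is easy to quantify: compare the energy of a single microwave photon (using the common 2.45 GHz oven frequency and standard physical constants) with the 13.6 eV needed to ionize a hydrogen atom.

```python
# Photon energy of microwave-oven radiation vs. atomic ionization energy.
# E = h * f, converted to electron-volts.

H = 6.626e-34    # Planck's constant, J*s
EV = 1.602e-19   # joules per electron-volt

microwave_ev = H * 2.45e9 / EV   # energy of one 2.45 GHz photon, in eV
ionization_ev = 13.6             # ionization energy of hydrogen, in eV

print(f"Microwave photon:  {microwave_ev:.1e} eV")
print(f"Ionizing hydrogen: {ionization_ev} eV")
print(f"Shortfall factor:  {ionization_ev / microwave_ev:.0e}")
```

A microwave photon falls short of ionization by a factor of about a million, which is why no amount of reheating leftovers chips electrons off your DNA.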
So, here is a person who knows that microwaves work by radiation, and that radiation causes cancer. But this person doesn’t realize that the physical mechanisms in each case are different, that the food cannot transfer the effects of radiation to you by being eaten, and that there is no syllogism here. But I wonder just how pervasive this kind of thing is: would this person be surprised if I shined a flashlight on them, and then announced – accurately and truthfully – that I was irradiating them? And how many other people are out there with similar misconceptions?
It strikes me that this sort of incomplete knowledge is a little dangerous, because it creates fear where none should exist. And there are many forces out there that would love for us to receive only partial knowledge, because then we can be driven by those constructed fears. If only more people could be motivated to pursue a fuller understanding of science…
So a while ago, I realized I had too many digital pictures on my main laptop and bought a 500 GB external hard drive. I thought I got a good deal. However, for a number of reasons, that has turned out not to be the case. Reason #1 on the list is that the hard drive was working fine one day, and then the next, I noticed that my Picasa screen saver was drawing only from pictures on the laptop’s local hard drive. The external drive no longer appeared in My Computer.
I checked Device Manager, which found the external USB disk drive and said it was working properly – but when I clicked the “populate” button under the “Volumes” tab, I was informed that the drive was unreadable. Okay, I thought, let’s try some other stuff in case the partition table got messed up.
With a little help, I got a bootable copy of Linux onto a USB thumb drive and brought my computer up with some Linux drive-recovery tools. (A quick note: when I entered “sudo fdisk -l” into a terminal, my external drive showed up as /dev/sdc but I got an error about there not being a valid partition table and I couldn’t force-mount the drive. [Also, if there was a “science” command in Linux, it would be an example of a command that “sudo” actually makes less useful.])
I installed and ran Testdisk. When I came to the bit where I had to select which volume to scan, I saw that /dev/sdc was listed, but the reported size of the drive was about half of its actual 500 GB capacity. I scanned it anyway, and Testdisk came up with a totally blank partition structure table. No entries at all: after the table column headings, there were only a few line breaks and then the message, “Partition sector doesn’t have the endmark 0xAA55.” I Googled around a bit for Testdisk hints, but I couldn’t find anyone else who got a completely blank partition table after a Testdisk analysis. That error message turns up plenty of times, but in the posts I found, there was always some partition or other to select and the message seemed to be irrelevant. A “quick scan” looked like it was going to take my computer on the order of 100-1000 hours to complete, so I declined that option. FAIL.
I also tried PhotoRec, an image-recovery program that came with Testdisk, because hey, I wanted to recover pictures. That found nothing on the drive. On the longest run I let it perform (overnight, and incomplete – again, it estimated on the order of 1000 hours to “achievement”) it told me that my 250 GB working hard drive was full and it had to stop. When I opened up the location where PhotoRec was supposed to store recovered files, there was nothing there. And my 250 GB internal drive had totally unchanged space usage. Go figure. FAIL again.
Finally, I ripped the drive out of its enclosure and discovered that the standalone external unit consisted of just a Western Digital 500 GB Caviar drive and a little control board that fed its SATA data and power ports to a USB 2.0 port and a power adapter port. Thinking that maybe the fault was with that control board – since the USB port seemed pretty flimsy to me – I pulled the drive out and connected a SATA-to-USB adapter straight to the disk drive. Same results as before. FAIL.
I can only surmise that this drive was shipped with little tiny explosive charges on each of the cylinders, or perhaps on the drive head, and one of the pictures I saved onto the drive the day before it stopped working inadvertently contained the code sequence that self-destructed the entire disk. Unless anyone else has any ideas for me…