In Praise of Computational Complexity

Nvidia Fermi GPGPU processor

In our fascination with tablets and smartphones, new paradigms have drastically simplified the way we interact with our daily technology. But have we forgotten how much of that growing capability rests on soaring computational power on the back end? Just the existence of ubiquitous computing clouds, from companies such as Amazon Web Services and its competitors, has brought oceans of low-cost capacity to startups. And rapidly growing analytical abilities can draw large insights from readily available data.

There may not seem to be an immediate connection here, but those of you who have been following my medical saga over the past couple of weeks know I have quickly become a bit obsessed with the benefits an explosion in computation has brought to neuroscience. It wasn’t long ago that brain surgery, vastly skilled as it was, remained somewhat crude and mechanical. Surgeons depended on good hands, good eyes (well, they still do), and sometimes probes that could distinguish tissue based on the reactions of the patient during the operation.

No radiation, but no pictures. Magnetic resonance imaging (originally called nuclear magnetic resonance, but “nuclear” was dropped because it sounded scary) uses magnetism to stimulate weak radio emissions that are then read. Unlike CT scans and other x-ray based technologies, it requires no potentially harmful ionizing radiation. But it also produces no natural pictures. The term “imaging” is actually a bit of a misnomer – it depends entirely on computation. MRIs are used extensively for soft-tissue examination – they are particularly helpful for examining sports injuries – but may be at their most important for sensitive brain imaging. I have had the good fortune to go significantly beyond standard MRI exams thanks to the extraordinary capabilities of Johns Hopkins Medicine in Baltimore.

While a standard MRI does very sensitive imaging, a functional MRI (fMRI) can detect brain activity in real time. The goal in my case was to determine as precisely as possible the very fine boundary between a tumor in the left temporoparietal lobe of the brain (near the skull, just above the left ear) and normal brain tissue. I have been extremely fortunate. I’ve been on pretty high doses of corticosteroids, and while they have some nasty side effects, the brain swelling that was the original problem resolved very quickly and my language skills returned to normal. (I’m hoping for a surgical miracle that will improve my poor typing skills, but I guess that’s too much to ask for.)

Exhausting experience. So the bottom line was I was in good shape for the fMRI, which turned out to be perhaps the most difficult and exhausting experience of my life. The process requires extremely fine measurement to differentiate between oxygenated and de-oxygenated blood within the brain. The Center for Functional MRI at the University of California, San Diego gives a thorough, if somewhat technical, explanation:

The discovery that MRI could be made sensitive to brain activity, as well as brain anatomy, is only about 20 years old. The essential observation was that when neural activity increased in a particular area of the brain, the MR signal also increased by a small amount. Although this effect involves a signal change of only about 1%, it is still the basis for most of the fMRI studies done today.

In the simplest fMRI experiment a subject alternates between periods of doing a particular task and a control state, such as 30 second blocks looking at a visual stimulus alternating with 30 second blocks with eyes closed. The fMRI data is analyzed to identify brain areas in which the MR signal has a matching pattern of changes, and these areas are taken to be activated by the stimulus (in this example, the visual cortex at the back of the head).

This is not because the MR signal is directly sensitive to the neural activity. Instead, the MR signal change is an indirect effect related to the changes in blood flow that follow the changes in neural activity. The picture of what happens is somewhat subtle, and depends on two effects. The first effect is that oxygen-rich blood and oxygen-poor blood have different magnetic properties related to the hemoglobin that binds oxygen in blood. This has a small effect on the MR signal, so if the blood is more oxygenated the signal is slightly stronger. The second effect relates to an unexpected physiological phenomenon. For reasons we still do not fully understand, neural activity triggers a much larger change in blood flow than in oxygen metabolism, and this leads to the blood being more oxygenated when neural activity increases. This somewhat paradoxical blood oxygenation level dependent (BOLD) effect is the basis for fMRI.
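The block-design analysis the quote describes can be sketched in a few lines. This is a hedged illustration, not the actual pipeline Hopkins or UCSD uses: the 30-second blocks and the roughly 1% signal change come from the quote, while the 2-second sampling interval, the baseline value, and the noise level are invented for the example. A single voxel's time series is correlated with the on/off task pattern; a strong correlation marks the voxel as "activated."

```python
import numpy as np

rng = np.random.default_rng(0)

# Block design from the quote: 30 s task / 30 s rest, alternating.
# One sample every 2 s (an assumed repetition time) gives 15 samples per block.
tr = 2.0
samples_per_block = int(30 / tr)
n_blocks = 6
design = np.tile(
    np.r_[np.ones(samples_per_block), np.zeros(samples_per_block)],
    n_blocks // 2,
)  # 1 = task, 0 = rest

# Simulated voxel: an arbitrary baseline with a ~1% BOLD change plus scanner noise
baseline = 1000.0
signal = baseline * (1 + 0.01 * design) + rng.normal(0, 3, design.size)

# Correlate the measured time series with the task pattern
r = np.corrcoef(signal, design)[0, 1]
print(f"correlation with task design: {r:.2f}")
```

A real analysis would first convolve the block pattern with a hemodynamic response function, since the BOLD response lags the neural activity by several seconds, but the matching-pattern idea is the same.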

What this means in the real world is you are required to lie absolutely motionless – for nearly three hours in my case – inside this extremely uncomfortable and rather claustrophobic machine. The tests are, in their own way, fascinating. In the first series, for example, a voice reads text to you. It was difficult enough to hear, given the assortment of beeps and strange noises an MRI generates nearly all the time, but the reading began in a nonsense language that sounded vaguely but not quite Scandinavian, something like a totally deranged Swedish Chef. Then the language would abruptly shift to English (though the text, an oddly northern scene, was still read in a vague Scandinavian accent) and I was supposed to concentrate on what I was hearing. Then it would change to nonsense, then again, repeatedly, to that chilly English. At no time did I speak; I merely thought. (I don’t think the machine is capable of telling what you are thinking, just that you are thinking, where your thoughts are occurring, and the intensity of the thought.)

Projected tests. Some tests were visual, projected on a not very good-quality screen above my head. I was shown pairs of words and was supposed to press a button – the only time I was allowed to move at all – when I identified the pairs as rhyming. This proved a lot more difficult than expected, since there were lots of near-rhymes that would depend considerably on specific, individual pronunciation. Mixed in with the words were patterns of lines. I was supposed to press the button when the patterns matched, and, again, it was made a bit tricky by close but not-quite matches.

I was shown a series of nonsense squiggles, which I was supposed to ignore, intermixed with a series of cartoony images of hearts or cars or birds, which I was supposed to concentrate on and identify. Then there were the word patterns. Something like
Ghhe fpet Smrgy gjtj mrogy Quen _______
would appear on the screen. This and a few more “sentences,” with or without blanks, would pop up and I was supposed to ignore them all. Then I would see a sentence like
There is no school on Saturday or ________
and mentally fill in the blank, followed by a few more valid sentences. Finally, I was given more squiggles intermixed with letters, and when I saw a real letter, I was supposed to think of a series of words that began with it.

None of this sounds all that difficult, but after several hours of it under noisy, uncomfortable, and stressful conditions, I felt my mental acuity draining away. I think I finally fell asleep at the very end, fortunately just for the part where they needed me to be perfectly still while a contrast dye was injected.

The data collected by an fMRI is extraordinarily complex, and the machine doesn’t just spit out results (and I don’t yet know just how well I did; I will probably get a more detailed report just before the surgery next week.) In fact, a key technology that has made the fMRI possible is the use of extremely powerful general-purpose graphics processing units (GPGPUs) from companies such as Nvidia and AMD. These are massively parallel chips with vast numbers of simple but extremely fast cores built on a single-instruction, multiple-data (SIMD) architecture. In this case, the goal is not traditional imaging at all but very intense statistical processing.
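What makes SIMD hardware such a good fit is that a scan involves hundreds of thousands of voxels, each needing the same statistical test. A rough NumPy sketch of that data-parallel pattern follows; the voxel count, block pattern, "active patch" size, and noise levels are all made up for illustration. Vectorizing the correlation so one expression covers every voxel at once is exactly the shape of computation a GPU accelerates.

```python
import numpy as np

rng = np.random.default_rng(1)

n_voxels, n_timepoints = 100_000, 90
design = np.tile(np.r_[np.ones(15), np.zeros(15)], 3)  # 30 s on / 30 s off pattern

# Fake scan: every voxel is noise; a small "active" patch also follows the design
data = rng.normal(0, 1, (n_voxels, n_timepoints))
data[:500] += 1.5 * design

# One correlation per voxel, computed in a single vectorized pass --
# the same arithmetic applied to every row at once, SIMD-style
d = design - design.mean()
x = data - data.mean(axis=1, keepdims=True)
r = (x @ d) / (np.linalg.norm(x, axis=1) * np.linalg.norm(d))

active = np.flatnonzero(r > 0.4)
print(f"voxels flagged active: {active.size}")
```

On a GPU the matrix product and norms would be dispatched across thousands of cores; the logic per voxel is identical, only the data differs, which is the essence of SIMD.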

There’s an awful lot going on here that’s on the cutting edge of computation. The datasets may be even bigger than those behind the analytics that play a growing role in the cloud operations of business computing. But there is no doubt that behind the pleasantly simple facade of our app-based computing, there is an awful lot of truly deep work going on.

Published by

Steve Wildstrom

Steve Wildstrom is a veteran technology reporter, writer, and analyst based in the Washington, D.C. area. He created and wrote BusinessWeek’s Technology & You column for 15 years. Since leaving BusinessWeek in the fall of 2009, he has written his own blog, Wildstrom on Tech, and has contributed to corporate blogs, including those of Cisco and AMD. He also consults for major technology companies.

3 thoughts on “In Praise of Computational Complexity”

  1. What a great article! I wish you the sincerest wishes of completely surmounting these issues.

    It’s actually called Nuclear Magnetic Resonance (both scary words). It involves the use of both magnetic fields (extremely powerful ones) and non-ionizing radiation. Chemists have been using the technique to determine structures of molecules for decades. When the technique evolved to yield three-dimensional information, it became useful for imaging. The computational power is needed because the data (which is massive) gets put through Fourier transforms. This is what GPUs (and DSPs) excel at.

    Lest you think that this doesn’t somehow bring me back to “openness”….
    To think that these inventions and discoveries could have been done in “gilded cages” is folly. 😉
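The Fourier-transform step the commenter describes can be illustrated with a toy reconstruction. This is only a sketch of the principle, not a real scanner pipeline: the 64×64 "anatomy," the square-shaped feature, and the noise level are all invented. The scanner effectively samples the image's 2-D Fourier transform (called k-space), and recovering a picture is an inverse FFT over that raw data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "anatomy": a bright square on a dark background
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# The scanner measures k-space, i.e. the 2-D Fourier transform of the image
kspace = np.fft.fft2(image)
kspace = kspace + rng.normal(0, 0.5, kspace.shape)  # measurement noise

# Reconstruction: inverse FFT back to image space
recon = np.abs(np.fft.ifft2(kspace))
print(f"peak of reconstructed image: {recon.max():.2f}")
```

Scaled up to full 3-D volumes acquired repeatedly over time, this transform work is the massive, regular number-crunching that GPUs and DSPs handle so well.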
