Thursday, October 29, 2009

The Great Pumpkin is out to get you...


So, I have a calendar from the Nature Conservancy in the lab, and on it, for the month of October, there is a little blurb about pumpkins being treated with pesticides that are toxic to the human nervous system.  The chemicals in question, malathion and diazinon, are neurotoxins... at least to bugs, which is why they make such good pesticides.  When it comes to humans, their toxicity is debatable.  Malathion, at the low doses used in agriculture, is essentially harmless to humans (though I don't suggest eating or drinking it at higher doses).  Diazinon, on the other hand, can be more toxic, though, again, you should be okay unless you are eating or drinking it directly.  That being said, you should always wash any pumpkins you plan on carving or cooking with, AND you should always wash your hands thoroughly after you've been handling any pumpkins.  Also, I agree with the Nature Conservancy in their recommendation to find a local, organic grower to avoid the chemicals altogether (you can find one at http://www.localharvest.com/).  The reason is that, while these chemicals are not very harmful to humans, they can certainly be harmful to other, smaller animals that might raid a farm and that don't have the benefit of being able to wash with soap or cut through the tough skin of the pumpkin with a knife.  Plus, we don't want these chemicals to build up in the soil and groundwater to the point where they reach concentrations that are toxic.  You can even look at your local grocery store to see if they sell certified organic pumpkins; if they are USDA certified organic, that means no synthetic pesticides like these were used, and you and the environment can breathe a sigh of relief.
Of course, I can't stop there, because there's a great opportunity here to talk a little neuroscience.  I said that malathion and diazinon are neurotoxins, but what are they, and how are they toxic?  Both belong to a class of chemicals known as organophosphates, and both are cholinesterase inhibitors, which means that they block the action of an enzyme in the nervous system called acetylcholinesterase.  In a previous post, I described how the transmission of nerve impulses occurs, and how those impulses travel across synapses in the form of chemicals called neurotransmitters (here).  What I neglected to mention was that after the signal has been conducted, the neurotransmitters need to be removed from the synapse, or else the post-synaptic cell will be fooled into thinking that another impulse has been transmitted, and another, and another, indefinitely, until the neurotransmitter molecules are cleared away.  Some neurotransmitters, like serotonin, are reabsorbed by the presynaptic cell; antidepressants like Zoloft and Prozac work by preventing this reuptake, thus increasing the activation of cells responsive to serotonin.  In the case of another neurotransmitter called acetylcholine, the molecules get broken down by the enzyme acetylcholinesterase.  Acetylcholine is primarily used by the neurons that allow your brain to control muscle movements.  Blocking acetylcholinesterase from breaking down acetylcholine in synapses can therefore lead to continued muscle contraction (there's a little toy sketch of this idea at the end of the post).  That may not sound so bad until you realize that this is actually the mechanism of action of another organophosphate/cholinesterase inhibitor: sarin nerve gas.  Sarin is a very potent acetylcholinesterase inhibitor that causes its victims to convulse wildly, often dying from suffocation as the diaphragm and accessory muscles involved in breathing contract uncontrollably, preventing the victim from taking in sufficient air.

Now, I know that's scary, but, despite being organophosphates and cholinesterase inhibitors, malathion and diazinon are not nearly as potent as sarin, and like I said, at low doses malathion is actually quite harmless to humans (unless you eat or drink it at high concentrations).  Diazinon, while still far from lethal, is a little more toxic, and has been banned by the EPA for residential use (though it is still permitted for agricultural use).  Now, if you still want to buy some pumpkins from the store (or already have), the good news is that they were likely treated mainly with malathion, BUT, if you have some really GIANT pumpkins, they were probably treated with diazinon (or both diazinon and malathion).  In order to get giant pumpkins, you have to leave them in the field longer... and the longer they're in the field, the more likely bugs are to camp out and have a good meal, so growers of large pumpkins tend to rely on insecticides like diazinon.  Of course, by buying these pumpkins you are supporting the wide-scale use of these chemicals, which can damage the nervous systems of lots of smaller animals, like birds and rodents who might raid the pumpkin patch for a meal, or fish and amphibians who may be getting higher doses of these chemicals in the water runoff to streams and ponds.  So while the "Great Pumpkin" may not be real, the real pumpkins at your local grocer can be just as scary.  By the way, the cartoon comes from http://shinjiku.deviantart.com/ ; I hope the artist doesn't mind my swiping it for this post.
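And here's that toy sketch, for anyone who likes to think in code.  This is not a real pharmacological model; the rates and doses are invented purely to illustrate the idea that when the enzyme that breaks down acetylcholine is inhibited, the transmitter lingers in the synapse much longer.

import numpy as np

def ach_level(t, k, ach0=1.0):
    # Acetylcholine remaining in the synapse t ms after release,
    # assuming simple first-order breakdown at rate k (per ms).
    return ach0 * np.exp(-k * t)

t = np.linspace(0, 10, 6)            # 0, 2, 4, ... 10 ms after release
normal = ach_level(t, k=1.0)         # uninhibited esterase (made-up rate)
inhibited = ach_level(t, k=0.1)      # 90% of the esterase blocked (made-up)

for ti, n, i in zip(t, normal, inhibited):
    print(f"t = {ti:4.1f} ms   normal = {n:.3f}   inhibited = {i:.3f}")

With the enzyme mostly blocked, the signal that should have faded in a couple of milliseconds is still mostly there at 10 ms, which is the cartoon version of why the muscle keeps contracting.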

Monday, October 26, 2009

We live in exciting times! Blindness cured!

Well, not exactly, but some sensationalism is warranted.  This is mostly a follow-up to a study that has been ongoing for the past two years or so, but researchers from UPenn and the Children's Hospital of Philadelphia have successfully used gene therapy to reverse (for the most part) the visual impairments associated with a disease known as Leber's congenital amaurosis, or LCA.  To be clear, this is not a cure for all types of blindness (that is, blindness due to other causes), nor is it even a full cure for LCA, as none of the patients have regained completely normal vision (though they have made tremendous improvements), and the mutated gene targeted in this study only accounts for 8-16% of all LCA cases.  Still, this is incredibly promising research, for several reasons.  First, and most obviously, despite not being a complete cure, a single injection of a gene therapy mostly reversed LCA-associated blindness, to the point where half of the patients are no longer legally blind and can even navigate obstacle courses in low-light conditions.  The second major aspect of importance is the gene therapy itself.  In this case, an adeno-associated viral vector was designed to deliver a functioning copy of a gene (called RPE65, or retinal pigment epithelium-specific 65 kilodalton protein) to compensate for the mutated version that underlies LCA in a small proportion of cases.  What's amazing is not the virus-mediated delivery of the gene, as this is a technique that has been around for a while and has been used successfully many, many times in mice and other experimental animal models.  What's exciting about the use of this technique in humans is that the FDA is very, very cautious when it comes to allowing it to be used on humans.  Mainly, they are concerned because a virus is used to carry the gene and insert it into human cells, and because, in the long term, messing with the DNA in the cells could increase the risk that the infected cells will become cancerous.  Of course, the viruses used in gene therapy have been engineered so that they cannot replicate, and therefore cannot cause disease (much like the attenuated viruses that are used in vaccines).  But the worries about cancer can only be alleviated when we have a large enough group of patients whom we can follow over time to see whether or not they develop any tumors.  As mentioned, this particular study is already two years in, and, so far, there does not appear to be any increased incidence of cancer, which is very exciting and promising (though, of course, these patients will still need to be monitored as the years go by).  Finally, the fact that these viral-mediated gene therapies have been used and validated so extensively in lab animals is what made this therapy possible (and successful) in humans.  Thus, another major point this study demonstrates is the importance of lab animal use in biomedical research. 
I also liked how the researchers used each patient as his or her own control by injecting the therapy into only one eye.  In this way, they could compare their results from the treated eye to the untreated eye, and since both eyes are in the same patient, they should be as close to identical as possible, including the amount of blood flow they receive, etc.  This lets the researchers be sure that it was really their treatment that made the patient's eyesight better, and not a spontaneous improvement in the disease (or a miracle).  If the treated eye gets better and the untreated eye doesn't, the gene therapy appears to work.  If both eyes stay the same, or get worse at the same rate, then the therapy didn't work.  And if the treated eye actually gets worse faster than the normal rate of degeneration due to the disease (which should be evident in the untreated eye), they will know that there is something wrong with the treatment.  (There's a little toy example of this kind of within-patient comparison below.)  Of course, this type of design makes it difficult to rule out any placebo effect (since, I'm assuming, the patients all know that at least one eye is going to be receiving the treatment), but given the remarkable results, I think we can rule out the placebo effect, or, if not, apparently we can use it to reverse certain types of blindness.  Either way, I think we should be happy.
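For the statistically inclined, here is that toy example of the paired, within-patient comparison.  The "visual function" scores are completely made up for illustration; the point is only that each patient's treated eye is compared against that same patient's untreated eye.

from scipy import stats

# Hypothetical scores (higher = better); one treated/untreated pair per patient.
treated_eye   = [62, 70, 55, 81, 66, 74]
untreated_eye = [31, 35, 30, 40, 33, 36]

# Paired test: each difference is computed within a single patient,
# so differences between patients largely cancel out.
t_stat, p_value = stats.ttest_rel(treated_eye, untreated_eye)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

A large, consistent treated-minus-untreated difference across patients is what lets you attribute the improvement to the therapy rather than to a spontaneous change in the disease.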

Friday, October 23, 2009

The cerebral cortex is not in the neck.

Here's one that simply amazes me.  Someone brought this to my attention a little while ago, but it took me some time to track down.  In the TV show "The Unit", one of the members of this secret counterterrorism group gets killed.  His name was Hector, and I guess it was a big ratings ploy in season 3, when it was leaked that someone from the unit was going to be killed.  Hector was shot through the neck by a sniper, and the bullet ultimately winds up lodged in his chest cavity somehow.  Ignoring the fact that the bullet made a magical U-turn to end up where it did, I am more perplexed by the scene in the show where the medical examiner tells one of the other members of the unit that "the bullet entered the neck here, snapping the cerebral cortex.  He felt no pain."  In the scene, the guy even points to the cervical portion of the spine (the neck) as he says the phrase "snapping the cerebral cortex".  I hate to break it to the writers of "The Unit", but the cerebral cortex is in the brain, not in the spinal cord.  Sort of like an onion, the brain has several layers, except, unlike an onion, the brain tends to be bigger, squishier, and wrinklier.  Just like the tough outer skin on the onion, the brain has a tough protective skin called the dura mater, which basically means "one tough mother" (dura is the Latin word from which we derive the word durable, and mater means mother, as in alma mater, or "nourishing mother", a term we honor our colleges and universities with since they nourish us with scholarship).  Under the dura mater is the arachnoid layer, which doesn't really resemble anything you'd see in an onion, but looks more like a collection of spiderweb-like structures (thus "arachnoid") that help to cushion your brain against collisions with the inside of your skull.  Beneath the arachnoid lies the pia mater ("soft mother"), and, together, these three layers make up the meninges (which may sound familiar if you've heard of meningitis, an inflammation of the meninges, usually caused by infection.  Meningitis can be bacterial or viral, and in some cases, usually when it is bacterial, it can be fatal, which is usually when it shows up in the news).  Directly under the meninges lies the cerebral cortex.  It is the outermost layer of what we typically think of as the brain: the gray, wrinkly ball that sits in the skull.  The cerebral cortex is actually the part of the brain that gives it its gray and wrinkled appearance, and it is also an area that is very important in most of our thinking, feeling, and doing.  The image below gives you some idea of the cerebral cortex, though it might be a little tough to read; I recommend going to the site I took it from, http://www.coheadquarters.com/coOuterBrain1.htm , to get a better look.  Once you get below the cerebral cortex you hit subcortical structures like the basal ganglia (of which the striatum is the largest portion), and even the cortex itself is subdivided into many layers (six in the human brain).  Unlike an onion, though, or the picture, most of these layers are not so easy to pull apart; they are distinctions that have been made by looking at the tissue under a microscope rather than an onion-like layering.
All of that being said, I'm not saying that getting shot in the neck wouldn't kill you, especially if it severed the spinal cord, and, if that were the case, then it is possible that you wouldn't be in much pain (at least you probably wouldn't feel much below the point at which the spinal cord was severed), but I think your cerebral cortex would remain intact.

Friday, October 16, 2009

Society for Neuroscience Annual Meeting


I'm off to Chicago for the next 5-6 days for the annual meeting of the Society for Neuroscience (along with some 30,000 other neuroscientists).  I realize that I have not been posting a lot lately (I am working on a couple of papers that have been demanding most of my time) and now with the conference I will likely continue to be absent.  But, hopefully, when I come back, I will have lots of cool new stuff to talk about.

Thursday, October 15, 2009

Booth loves Bones.

So I was watching an episode of the Fox television show "Bones" a while back.  (I have yet to check out any of Kathy Reichs' books; she is the forensic anthropologist/author who is the inspiration for, and a producer of, the show, but I do enjoy the show, and the overly rational, though exaggerated to the point of caricature, Temperance Brennan, a.k.a. "Bones".)  Anyway, the other main character in the show is Booth (played by David Boreanaz), an ex-army sniper turned FBI agent who investigates murder cases with "Bones" and the rest of her forensics team.  Recently, Booth was diagnosed with a brain tumor and underwent surgery to have it removed; during the recovery, he was in a coma and dreamt that he was married to, and madly in love with, Bones.  When he fully recovered, his amorous feelings followed him out of the dreamworld and into reality.  As evidence that Booth did, in fact, feel like he was in love with Bones, the FBI psychologist Dr. Sweets (played by John Francis Daley, who you may remember from the excellent, but short-lived, series Freaks and Geeks) shows Booth PET scans of his brain showing "activity" in the VTA, or ventral tegmental area.  PET scans are somewhat similar to MRI scans, and if you follow the posts, I've described how fMRIs work in the past, but again, briefly: fMRIs measure blood flow in the brain.  When a particular area is being used, it needs more oxygen and glucose, and thus more blood.  But here's the catch: every part of your brain needs oxygen and glucose, so a single MRI image wouldn't really tell you very much.  What you need to do is get a baseline, and then, while changing one thing (asking the person to think about something specific or to perform a specific task), you take another image.  When you subtract out everything that is the same between the two images, and look only at what is different, you get a functional MRI (fMRI), that is, an image showing exactly which areas of the brain seemed to be working harder during that specific task (there's a toy example of this subtraction just below).  PET scans (or positron emission tomography) work in a similar fashion, except that instead of measuring the iron (or oxygen) in the blood, PET scans measure glucose (which has been tagged with radioactive fluorine to show up in the scan).  But the same principle holds: if the PET scan is to be specific to a function, you have to have one image as a baseline and another for the test condition.  In the case of Booth, I don't dispute that thinking about the one you love leads to increased blood flow in the VTA and the caudate; there are several studies suggesting these two areas are involved in romantic love.  (One such study can be found here, or, for the popular press version, here.)  No, what I dispute is that a single PET scan image of Booth's brain would show the VTA and caudate working overtime, unless Booth was specifically thinking about Bones while the scan was being done.  And even then, you would really only get an image like the one they showed if a baseline image had been taken.  A minor point, I know, but when neuroscience comes up on a popular TV show, I can't help but comment.  Actually, despite the minor error, I have to commend the producers and writers of Bones for introducing some pretty cool and pretty recent neuroscience into the show.
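Here's that toy example of the subtraction idea, with invented numbers (real analyses involve many images, motion correction, and proper statistics, but the logic is the same): everything the baseline and task images share cancels out, and only the task-related activity remains.

import numpy as np

# Invented 3x3 "brains": signal at every voxel during rest and during the task.
baseline = np.array([[5.0, 5.1, 4.9],
                     [5.0, 5.2, 5.0],
                     [4.8, 5.0, 5.1]])

task = np.array([[5.1, 5.0, 5.0],
                 [5.1, 7.4, 5.1],   # one "region" ramps up during the task
                 [4.9, 5.1, 5.0]])

difference = task - baseline
print(difference)
print("voxels flagged as task-related:")
print(difference > 1.0)             # crude threshold, purely for illustration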
And if you find it interesting too, here's a little more fodder for your brain: in many of these studies, for both the VTA and the caudate, it tends to be the right side of the brain that lights up, leading us to wonder what the VTA and the caudate on the left side of the brain are doing.  (And if you think that makes sense because the right side of the brain is supposedly the creative, artsy, and romantic side and the left brain is the rational side, you may have a point, or you may find a post about whether or not all that right brain/left brain stuff is really true sometime in the near future.)  The really interesting conclusion of these studies, however, is that they suggest romantic love (as opposed to familial or brotherly love, or platonic love, i.e. friendship) derives from motivational areas of the brain, not the ones involved in emotions like joy or sadness.  So, while relationships involve their fair share of joy and sadness, apparently falling in love is more like becoming addicted to cocaine...

By the way, if you're wondering why I didn't talk at all about Booth having visual hallucinations from a cerebellar tumor, it's because that is within the realm of possibility.  Though visual hallucinations are more commonly associated with tumors or lesions in the occipital lobe (in the back of the brain, where the visual cortex is located), they can occur in patients who have cerebellar tumors.  Though it is rare, cerebellar tumors can lead to visual hallucinations because the cerebellum is also in the back of the brain, and a large growth can put pressure on the occipital lobe, which is right next door.  The more interesting thing is that Booth's hallucinations were actually visual and auditory, as he could hear and converse with his hallucinations.  This is even more rare, as auditory processing occurs in the temporal lobe, which is a little further forward in the brain, and auditory hallucinations are most commonly associated with tumors located there.  Just as the occipital lobe is heavily involved in visual processing and the temporal lobe in auditory processing (hearing), the cerebellum is involved in motor movements, particularly those that require coordination (like running or playing sports).  As such, most cerebellar tumors are associated with "ataxia", the loss of coordinated movement in the limbs; often this is revealed by an unsteady gait, by falling forward or to the side when walking, or by clumsiness when performing tasks that require coordinated movements of the hands or feet (see http://www.ncbi.nlm.nih.gov/books/bv.fcgi?rid=cmed6.section.19621 for more symptoms of brain tumors).  It is odd that Booth wouldn't show any of these symptoms and would have only the much rarer hallucinations, but it is not impossible.  Everyone is different (and every tumor is different); this is what makes medicine (and finding a cure for cancer) so difficult, and while it is unlikely that a patient with a cerebellar astrocytoma would present with only visual/auditory hallucinations, it can (and sometimes does) happen.

Saturday, October 10, 2009

Communicating Science

This is a big problem in science.  There appears to be a serious disconnect between the general public and science and scientists these days.  Books (and many a blog) are being written on the subject, and many people are concerned that this disconnect is the reason half of this country still believes in creationism rather than accepting evolution as the origin of man (and of the rest of the species with which we cohabitate on this planet), or doesn't believe that global warming is real, etc., etc.   Another example of this is NASA's latest project, in which they crashed a rocket into the moon to look for ice/water under the surface.  The idea was great, and elegant in its simplicity: slam a rocket into the lunar surface, and see if it turns up some ice in all of the dust and dirt it blows up into space.  (Kind of like when you throw a rock into a pond and water splashes up; the same thing happens if you throw a rock into a pile of sand, or when you slam a rocket into the moon.)  The Associated Press ran an article about the experiment, which, when viewed live yesterday, was, well, unimpressive to the naked eye.  The article is actually pretty good, and I myself was expecting a little more when I watched the video footage (given what I had seen in NASA's computer simulations, and its prediction of a 6-mile-high plume of dust and dirt).  If you read the article, they quote heavily from Michio Kaku, who is an excellent communicator of science, but there's also a comment from Alan Stern, a space consultant and former NASA associate administrator for science, who said the mission was executed for "a scientific purpose, not to put on a fireworks display for the public."  This is a problem.  I understand that science is rarely done to be a spectacle for the public, but when a mission is touted to the degree this one was, and the public is obviously and readily excited to see a science experiment in progress, you shouldn't be so quick to dismiss them when they are disappointed.  You should, like Kaku did, explain why those expectations weren't met, but also why there are still lots of reasons to be excited about the experiment and about what we might find out.  I mean, in this day and age of instant gratification, science is becoming less and less popular because it takes time.  Most people have no idea that it will probably be another month before all of the data from this mission are analyzed (i.e., before we can have an answer to whether or not there is water on the moon).  And even if the experiment worked perfectly, we may still not have a complete answer.  This is the nature of science.  It takes time, and it is an intensive process.  When you have the public's interest, you should maximize the opportunity and take advantage of the "teachable moment".  The public understands that NASA didn't crash a rocket into the moon simply for their viewing pleasure, but they were interested nonetheless.  Capitalize on that interest, explain why you are crashing a rocket into the moon and why it's so important, and maybe, the next time a senator has to vote on how much funding NASA is going to get, he or she can vote for a funding increase and rest assured that the taxpayers' money is going to good use.
And by the way, finding water on the moon would, in the words of Michio Kaku, be more valuable than finding gold.  From the water, we could get hydrogen for rocket fuel, oxygen to breathe, and, of course, water to drink and to grow food, making the idea of putting a livable colony on the moon a more feasible possibility (and a LOT LESS expensive).  It would also help with our basic understanding of the composition of large bodies in our solar system and give us an idea of how rare (or how commonplace) water is in the universe.  Important stuff.

Thursday, October 8, 2009

More support for glia, less for the neuron doctrine...

This story is a little bit old, but it's just another interesting example showing how important glia are, not only for neuronal function at the cellular level, but also for important behaviors, like avoiding smells that could mean danger or seeking out those that mean food.  Anyway, here's the story: http://www.sciencedaily.com/releases/2008/10/081030144624.htm .  I would write more, and I have lots of other topics/posts to get up, but I am really busy with a couple of manuscripts and research (and the SFN conference is coming up), so this will have to suffice for now.

Sunday, October 4, 2009

Lots of people believe in neuromyths...

Even teachers undergoing postgraduate training...
http://www.bristol.ac.uk/news/2009/6517.html
My favorite is the teacher who believes that eating walnuts helps to "moisturize the brain".  Unfortunately, I don't think that one warrants its own post.

Saturday, October 3, 2009

More of The Lost Symbol, Noetic Sciences, and the Pineal Gland

So, a little while back, I posted a bit on the so-called "Noetic Sciences" which feature in the new Dan Brown novel The Lost Symbol.  Briefly, the "Noetic Sciences" refers to research conducted by a group called the Institute of Noetic Sciences (IONS), which was founded by astronaut Edgar Mitchell to study parapsychology (think Ghostbusters).  While topics like "healing from a distance" (with thought) and other "mind over matter" type ideas can be tested experimentally, there is no evidence suggesting that these things are possible.  Thus, institutes like IONS, which claim to be making progress on these fronts, get relegated to the categories of pseudoscience and quackery.
Anyway, my point here is not to beat up on IONS, because I don't really know how much real science and how much pseudoscience is going on there.  Rather, I would like to try to correct some of the neuroscience myths that showed up as I was finishing The Lost Symbol.
One in particular I want to take issue with is a line toward the end of the book when the character Katherine Solomon describes the pineal gland:
“Perhaps you’ve heard… about the brain scans taken of yogis while they meditate? The human brain, in advanced states of focus, will physically create a waxlike substance from the pineal gland. This brain secretion is unlike anything else in the body. It has an incredible healing effect, can literally regenerate cells, and may be one of the reasons yogis live so long. This is real science, Robert. This substance has inconceivable properties and can be created only by a mind that has been highly tuned to a deeply focused state.”
Now, I know a lot of people have trouble realizing that Dan Brown writes FICTION, so I think it's important to point out that this claim about the pineal gland is false.  First, we know what the pineal body (aka pineal gland) is, and what it does.  What it secretes is not some mysterious waxy substance that can magically heal people, but a hormone called melatonin, which helps to regulate our circadian rhythms, otherwise known as our sleep/wake cycle.  Melatonin is not, as far as I know, "waxlike", nor can it cure leprosy (or anything else), nor is it "unlike anything else in the body".  What melatonin can do is tell you when you should be tired and go to sleep.  (In fact, for a long time it was thought that the sleepiness you feel after Thanksgiving dinner was caused by the large amount of tryptophan in the turkey; tryptophan is an amino acid that can actually be converted to melatonin in the body.  There is no good evidence, however, that the tryptophan in turkey causes that sleepiness.  I will likely devote a whole post to this when we get closer to Thanksgiving.)  Also, as you can see in the figure below, serotonin is the intermediate between tryptophan and melatonin, and the chemical diagrams show that serotonin is not so different from melatonin, though it does serve different functions (most notably as a neurotransmitter; having too little of it seems to be a major factor in depression and other psychological disorders).  But, different functions aside, I don't know if I would go so far as to say nothing else like melatonin is made anywhere else in the body.

Additionally, it's important to consider that we first discovered melatonin in the brains of lab animals.  In order to get melatonin from humans, samples would have to be taken from the brains of patients who have had to have brain tissue removed, or after they have passed away, or else it would have to be isolated from blood samples.  The Dan Brown book claims that scientists isolated this waxy substance from the pineal glands of yogis just after they finished meditating.  If this were true, the scientists would have had to anesthetize the yogis and then perform highly risky and invasive neurosurgery to get to the pineal and extract this substance.  Not only would this have been completely unethical and illegal, but the surgery would either have to have been performed twice on each subject (once before meditating and once after) or on twice as many yogis (half who meditated and half who didn't).  Needless to say, no study like this has been done, and if this mystical substance does refer to melatonin, it is not really waxy: it dissolves easily in blood, and it is purified as a white powder when it is isolated.

So where did Dan Brown get this idea?  My only guess is that there are a couple of studies suggesting meditation can increase the body's production of melatonin, which we assume is coming from the pineal, but which could just as easily be the result of the melatonin already present in the blood not being broken down and cleared as quickly (there's no way to know, because in studies like these the melatonin levels are measured in the circulation).  However, given the prevalence of melatonin on the supplement shelves and the number of studies showing that additional melatonin doesn't have any dramatic or convincing healing effects (or even helpful effects on insomnia and other sleep disorders, unless you already have really low levels of melatonin), I would hesitate to call it a "waxlike substance with healing properties".

Thursday, October 1, 2009

It was bound to happen sooner or later...

So, anyone who knows me well knows that my biggest pet peeve is creationism and its proponents. And by creationism, I am obviously also including its most recently popularized incarnation: so-called "intelligent design", or "ID".  Sadly, this "movement" is being promoted by people who are obviously and knowingly misconstruing evidence, or presenting isolated examples for which there is not yet complete or convincing evidence, as if this could be a valid argument for throwing out the entire theory of evolution by natural selection.  They ignore the awesome explanatory power of this theory, which is at the very heart of ALL BIOLOGY, and they do it all as part of a malicious smear campaign aimed at convincing the uneducated masses (uneducated, at least, in terms of a good understanding of evolution by natural selection) that there is some controversy among scientists about evolution and that there is a paucity of data to support Darwin's theory.  And the reasons they do this have nothing to do with improving scientific research or education; it is all because of some misplaced notion that acceptance of the evidence for evolution by natural selection somehow results in the moral downfall of humankind (despite the abundance of research correlating increased crime, drug use, teen pregnancy, and abortion with increased religiosity, NOT with the acceptance of Darwin's theory. Don't believe me? Find one example here).  Anyway, the interesting bit of irony here is that, despite ID being the one thing in this world that really seems to get under my skin, I work down the hall from its leading proponent.  As such, our department has a disclaimer on our website to clarify that there is no controversy among biologists, and certainly not among the biologists at Lehigh University...
The faculty in the Department of Biological Sciences is committed to the highest standards of scientific integrity and academic function. This commitment carries with it unwavering support for academic freedom and the free exchange of ideas. It also demands the utmost respect for the scientific method, integrity in the conduct of research, and recognition that the validity of any scientific model comes only as a result of rational hypothesis testing, sound experimentation, and findings that can be replicated by others.
The department faculty, then, are unequivocal in their support of evolutionary theory, which has its roots in the seminal work of Charles Darwin and has been supported by findings accumulated over 140 years. The sole dissenter from this position, Prof. Michael Behe, is a well-known proponent of "intelligent design." While we respect Prof. Behe's right to express his views, they are his alone and are in no way endorsed by the department. It is our collective position that intelligent design has no basis in science, has not been tested experimentally, and should not be regarded as scientific.
I support this sentiment wholeheartedly, and, to the best of my knowledge, so does everyone else in the department, except for Dr. Behe.  However, recently the chair of our department received a letter from someone who wished to defend Dr. Behe's honor, and he cc'd that letter to everyone in the department.  Now, the dilemma is: do I waste my time countering all the lies and misinformation this letter contains, or ignore it because I should be spending my time doing something more productive... say watching TV, or sleeping, or mindlessly beating my head against the wall... but then I can't resist... and I guess that's why it's a pet peeve.
So, if you want to read all of this drivel in its entirety, you can get a pdf here.  I'm just going to pick out some of the more annoying points, and then send this all on to PZ Myers to see if he wants to waste any more time picking apart this ID-iocy.

My first point is this: the chair of our department is Dr. Murray Itzkowitz, not Itzkovitz.  If you are going to write a letter to someone and accuse them of not knowing their field of study (a field in which he has worked for decades and published numerous peer-reviewed papers), the least you could do is look up the proper spelling of his last name, which can be found on the same webpage you are critiquing.  Just a thought.

Point 2: The author of this letter claims:
"Dr. Itzkovitz, suppose I told you that I have personal knowledge that at least two members of your department (apart from Dr. Behe) strongly disagree with this position and that they have already privately communicated as much to several individuals of whom I am personally aware.  Furthermore, suppose I told you that these brilliant scientists are so afraid of losing their faculty positions at Lehigh that they have chosen to keep their professional opinions private until such time as they have attained tenure."
Now, I added the boldface emphases to point out that this person claims there are at least two untenured faculty in our department who support ID, and THIS IS A BOLDFACED LIE.  There are currently only four untenured faculty members in our department.  I don't just claim to know people who have talked to these people; I know the individuals themselves, very well, as our department is not very big.  I have worked closely with these faculty members, taught classes with them, conducted research projects with them, shared meals and more than a few drinks with them, and... they are my friends!  Of the four of them, I can tell you exactly how many support the notion of ID.  The answer is zero.

But then I guess I shouldn't be surprised at lies and misinformation from someone trying to support creationism... uh... I mean "intelligent design".  Let's move on to the arguments he puts forth against evolution by natural selection.  The author of the letter states:
"Sir, until you or your department faculty can fully and empirically explain from the perspective of either naturalistic materialism or Darwinian gradualism... the following phenomena, you have no legitimate basis, under the Scientific Method, for making such broad-brushed statements regarding either Dr. Behe or intelligent design."
 Before I get to "the following phenomena", let me point out that ID has nothing to do with the scientific method, as the principle tenets of the scientific method are 1. you make some observation(s), 2. you formulate a hypothesis, and 3. you design an experiment to test your hypothesis (and by test, I mean you try to falsify it).  Its on this 3rd point that ID fails, as just saying "some intelligent agent did it" is NOT a TESTABLE hypothesis, at least not when the intelligent agent is a supernatural agent (if it were J. Craig Venter, or aliens, then, maybe you'd have something).

Anyway, the first argument is:
"The Cambrian Explosion": the sudden appearance in the fossil record (with no prior transitional forms, as required by Darwin) of over 50 body plans ("phyla") in the geological instant of just 5-10 million years...
I take some solace in the fact that at least this guy believes the earth is older than 6,000 years, and that he views 5-10 million years as merely an instant in the geological age of the earth.  BUT this is actually a pretty common creationist talking point... they point to the "explosion" of biodiversity that took place during the Cambrian period and incorrectly claim that there's no fossil evidence for organisms existing before this time that could explain all these new species.  However, there is actually evidence for forms existing prior to the Cambrian, and those forms share similarities with the species that have been identified as arising during the Cambrian.  I will defer to PZ Myers' blog Pharyngula, where there is a nice little post on exactly this topic.
Our guy also argues that since the explosion we have only lost phyla, not added them, so, he feels, Darwin's tree must be wrong because it's upside-down.  First, this is misleading, because phyla are groupings of species based on the fact that they share very ancient body plans.  Numerous new genera and species have evolved since the Cambrian (humans and all other modern mammals, for instance), but the development of a whole new phylum would be surprising at this point in history, because selection works on what's available, and the common body plans of the established phyla are what's been available for hundreds of millions of years.  It's likely that we will not see any new phyla unless we all die out and the whole process basically starts over; then, after a couple billion years, we might get some new phyla.  But the point is, Darwin's tree has gained many branches; it just hasn't gained any new trunks.  Second, this argument also betrays a common misunderstanding about evolution by natural selection: the idea that selection is directional, or purposeful, is just wrong.  Darwin's tree branched out because he was explaining how we got so many different species over time, starting from the origin of life, which was likely a single species, or at least a small number of species.  But the loss of species has always been part of the equation, and we know from the fossil record that there are several periods in time when resources became scarce and many species died out (like the Permian extinction), which would seem, at least to this guy, like the tree turned upside down, but then righted itself again after that.  In recent history, we've again seen a pruning of the tree, this time because of human population growth, exploitation of resources, and pollution, which have caused the extinction of many species.  If these practices continue, Darwin's tree may well start "going upside down" again.

Argument #2: "The total absence of transitional forms in the fossil record"

Really?  This is just silly.  First of all, if you understand that evolution only ends for a population when it goes extinct, then you realize that ALMOST ALL FOSSILS are transitional fossils.  Our pen pal obviously accepts that there are numerous examples of fossils dating to the Cambrian period; to quote him: "These include representatives of every single animal phylum now in existence on earth, including the Chordates."  So he believes that fossils showing the evolution of a spinal cord exist, but they aren't transitional forms?  We have fossils of fish that were transitional to land-dwelling tetrapods; dinosaurs evolving from reptiles into birds, developing feathers and wings along the way; even our own ape-like ancestors, whose fossils reveal that they were walking on all fours less and less by about 3.4 million years ago (or 4.4 million years ago, based on this new find).  The creationists know this, but they like to put their hands over their ears and scream: "LALALALALA, I can't hear you!"
But let's flip this argument around.  What's really interesting about the fossil record is not that creationists pretend it lacks transitional forms; what's interesting is that in all of the fossil-hunting expeditions, and all of the accidental finds, among the multitude of fossils that have been found, characterized, and catalogued, there have NEVER been ANY discoveries of modern forms in antiquity.  For example, if you want to find evidence against the current theory of evolution by natural selection, you should look for a modern human skeleton that dates to more than 4 million years ago (or to any time much earlier than the roughly 200,000 years our species has been around), or a modern horse skeleton from 5 million years ago, etc.  If anomalies like these existed, then we would have to start figuring out why they were there, and our modern explanation would need to be revised and updated to explain these finds.  And yet, there have been no such anachronistic fossils.  The fossil record confirms evolutionary theory.

Argument #3: "The assembly - and operation of - the bacterial flagellum: Arguably the best evidence (some say "poster child"?) for intelligent design..."

Oh, good grief!  Not this tired old argument.  First of all, let's assume he's right (which he is NOT) that we lack the evidence to explain the assembly and operation of the flagellum... if that were the case, it would still NOT be evidence FOR intelligent design; it would simply mean that we don't yet have the evidence to explain how this structure arose.  You can't set up two ideas, A and B, point to an instance where there is not enough evidence to support A, and immediately claim that B must be the correct explanation... and not just the correct explanation for this one instance, but one to be broadly applied to all instances... you're forgetting about options C, D, and E... this is just faulty logic on top of bad (or, rather, a complete lack of) science.  A good scientist would generate several hypotheses (from which s/he could make testable predictions) about the origins of the flagellum, while a poor scientist (or non-scientist) might say "God did it", which is not only untestable (and therefore unscientific), but also stifles the curiosity that would lead us to an actual explanation of how things happened.  Perhaps two or more bacteria fused in a symbiotic relationship, bringing together various parts of this cellular motor; perhaps, over millions of years, proteins that were serving other functions in the cell were co-opted by mutations that brought them together to serve the function of motility; or perhaps it was magic... the bacteria flowed over the philosopher's stone and a flagellum magically appeared.  The first two ideas are scientific hypotheses and can be tested; the last explanation is completely ridiculous, and outside the realm of science (and no different from intelligent design).  But don't take my word for it, watch this clip from the excellent NOVA special Judgment Day: Intelligent Design on Trial
OR, you can watch this clip where Ken Miller explains the failings of the flagellum as an example of "irreducible complexity".



Argument #4: The genetic code...
So, this guy mentions wanting to quote John McEnroe, and all I can say to that is, right back at you: "Really?!?!? You cannot be serious!!!!"  Before I get to the incredibly weak argument that he uses here, let me just say that, if anything, the genetic code is one of the biggest pieces of evidence (and one of the most useful tools) IN SUPPORT OF evolutionary theory.  The fact that ALL living things on this earth share DNA (and RNA) as the basis of protein expression and heredity in our cells suggests that we ALL share a common origin.  Also, sequencing the genes of numerous species has led to only minor changes in phylogenies; ultimately, sequence comparisons have confirmed the phylogenetic relationships that morphological characters gave us.  (For instance, we suspected that, as humans, we share our most recent common ancestors with the other great apes, like chimpanzees... comparing our genomes, we see that we share roughly 99% of our genetic material with chimps, whereas, when we look at a mouse or a fish, animals with whom we would have to go much farther back in time to find a common ancestor, there are far fewer similarities in the sequences; a toy example of this kind of comparison is sketched below.)  And yet there are still enough similarities among even distantly related species that we have been able to study the genes involved in the development of fruit flies and use that information to discover the homologous genes that are critical to the development of humans and other mammals (because selection can only work with what's present, these genes have changed and evolved in our ancestors, but not so much that we can't still recognize them).  On top of that, we are now able to use the tools of genetics to attempt even more elegant studies into the nature of evolution (like this story I talked about here), where we can actually attempt to reverse engineer the changes that took place in individual proteins as they evolved over millions of years (i.e., we can empirically show how small numbers of mutations, or even single mutations, can dramatically change the functions of proteins, allowing them to be selected for or against over time; keep this in mind, it will come up again). 
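And here is that toy example of the kind of comparison I mean: percent identity between aligned sequences.  The snippets below are invented for illustration (real comparisons use whole genomes and proper alignment software), but the logic is the same: the more recent the common ancestor, the fewer differences you expect to see.

def percent_identity(seq_a, seq_b):
    # Percentage of matching positions between two pre-aligned sequences.
    assert len(seq_a) == len(seq_b)
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Hypothetical aligned snippets of the "same" gene in three species.
human = "ATGGCCATTGTAATGGGCCGC"
chimp = "ATGGCCATTGTAATGGGCCGT"
mouse = "ATGGCGATCGTTATGGGACGT"

print(f"human vs chimp: {percent_identity(human, chimp):.1f}% identical")
print(f"human vs mouse: {percent_identity(human, mouse):.1f}% identical")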
Of course, the argument that the genetic code bespeaks design and not evolution is not new either, and here is a post by PZ Myers debunking a similar claim. 
Putting all of that aside, what's this guy's argument?
Apparently, Bill Gates (among others) thinks that the genetic code looks like a "computer program" or like "advanced software".
Wow!  That's it?  DNA has the appearance of design?  And you can make arguments by analogy?  Whoa, slow down.  I'm so used to having to counter arguments that actually have EVIDENCE behind them, and not just weak analogies, that I'm going to have to actually... oh, wait, I can counter these too...
Okay, so again, this shows why the scientific method is so powerful: lots of things appear to be one thing, but careful experimentation and hypothesis testing reveals that our intuitive interpretations of these "appearances" are faulty.  For example, time appears to be constant.  My wife and I go to work each day, and despite my not seeing her, when she tells me she will be home around 6 o'clock, I know I can check my watch, 30 miles away at my job, and know when she is getting home.  If I call home, she will likely be there.  But Einstein (and others, who helped to test his ideas) showed that time is not constant, but rather variable, depending on how fast you are moving with regard to everything and everyone else (thus time is relative; hence the theory of relativity).  The same is true for many instances of supposed design, not just because many of these examples have been shown to be explained by evolutionary theory, but also because appearances are RELATIVE.  To explain, I will use many of the same arguments by analogy that our letter writer uses.  He writes:
"When we see 'John loves Mary' written in the sand, or hieroglyphics etched into an Egyptian cartouche, or the faces of four American Presidents sculpted into Mount Rushmore, or stone trilithons circumscribing an English meadow, we don't say 'look what the wind did!' or 'isn't that an interesting erosion pattern!' or 'what incredible glaciation produced that curious circle of erratics on Salisbury plain!' We infer design."
So, first, the "John loves Mary" argument.  This is what I mean by the appearance of design being relative.  As a thought experiment, suppose aliens from a distant galaxy happened upon a beach here on earth and saw "John loves Mary" written in the sand; they would likely not infer intelligent design if their forms of written communication were vastly different from ours, or if characters like our letters were common on their planet as nothing more than geological formations, kind of like this one.

They would see that writing as unintelligible, or ignore it completely.  They might, however, easily mistake a completely flat section of beach for obvious evidence of design, because on their planet written language is not about letters but about pixels of varying lightness and darkness, so when the aliens look at the sand, they see each grain as a pixel with a varying ability to reflect light.  Putting enough of these grains together probably reveals the alien equivalent of Shakespeare, but if any of us happened upon this alien, we would correct him immediately, and explain that wind and water arranged those grains of sand in a random fashion that just so happened to appear designed... to aliens.
Mount Rushmore is another one of my favorite examples because it always reminds me of the "Old Man of the Mountain", a famous landmark in New Hampshire (which, sadly, has continued to erode beyond recognition, but here it is in its former glory).

It's only natural to assume design here too, but we know that this face was sculpted by wind and erosion (not by some crazed Stan Lee fan trying to recreate the visage of Ben Grimm, aka the "Thing").  The point is, the human brain has actually evolved to look for and see design even when it's not there, particularly when it comes to faces (which is why we also see a bunch of shadows on the lunar surface as "the man in the moon", or the Virgin Mary in a bird turd), but in order for us to truly know whether something was designed (by man), we need evidence.  For example, we have numerous historical documents that tell us that Mount Rushmore was sculpted between 1927 and 1941, approved by Congress under President Calvin Coolidge, and based on the design of the sculptor Gutzon Borglum.  In the case of Stonehenge, we also have evidence to suggest that it was put there by humans.  For example, the type of rock used is not found anywhere near Stonehenge, suggesting that something or someone must have moved these huge rocks a distance of over one hundred miles (though that could have been a glacier!).  Next, when you look at the site, there is evidence of human burials, suggesting that humans were there and using the site, and, more convincingly, many of the tools used to shape and erect the stones have been found in digs around the site.  These same bits of evidence are what's missing in the argument for ID.  For example, where is the evidence for the tools the designer used to create DNA or the flagellum?  Did the designer drop a bit of his or her bologna sandwich in the bacterial motor?  Is there any other evidence that s/he was there?  No.  Not only is there no evidence FOR design in biological systems, but if you want to claim that the designer is a supernatural being, then there will never be a way to even look for such evidence.

The next argument is entitled "The failure of the 'primary axiom'", which is another tired old creationist argument that basically states that mutations cannot generate any new information.  This guy gets a little more specific and claims that mutations cannot generate new proteins (they only seem to be able to damage or detract from what's already there).  His conclusion is that if mutations cannot generate new proteins, then there cannot be any new and beneficial proteins for nature to select for, and therefore new features, and thus new species (including man), couldn't have evolved.  Even Dr. Behe doesn't believe this garbage!  In his latest book, Dr. Behe freely admits to accepting the evidence for macroevolution, including the origin of man from ancestors we share in common with the other great apes.  But let's take this argument apart just like the rest... First, single-letter mutations in the genome are not the only way to introduce new information into an organism or species.  There are numerous examples of viruses and other pathogens inserting strings of DNA into the genomes of their hosts (the ability of viruses to do this has actually given us the means to experimentally alter the genomes of lab animals, as in these recently developed transgenic songbirds).  Additionally, when the replication of DNA goes haywire, you can get duplications of segments of genes, or even whole genes, that get added into the genome.  Lots of genes that are important for development actually came about from multiple duplication events.  Some of these genes have very similar functions, because they haven't been selected for change; rather, it was more beneficial to have a redundancy of function to protect against deleterious mutations (as seems to be the case with some Bone Morphogenetic Proteins, like BMPs 2 and 4, which are very similar).  Then there are others that evolved to carry out many different functions, like the numerous Hox genes that are involved in establishing different segments of the body plan, and, subsequently, numerous different tissues, organs, and cell types.  On top of that, new proteins don't need to be manufactured to elicit great changes at the phenotypic level.  Many aspects of where and how much protein gets made from a gene are also encoded in the DNA sequence.  For example, a mutation in the promoter sequence could change which cells express that gene, or when they express it, or how much of it they express.  If this happens to a gene that is critical during development, dramatic changes can occur.  For example, scientists recently uncovered a step taken by the common ancestor of mice and bats toward the longer limbs needed for bats to develop wings.  By altering a part of a mouse gene known as an enhancer, they made the limbs of the mice longer, and more like bats' limbs, during development.  (An enhancer is a part of the sequence that doesn't even make it into the protein, but it enhances the level of transcription, thereby leading to more of the protein being expressed.)  Now, even if we ignore all that and assume that mutations are the only way to achieve change, there are actually still plenty of examples of new and different proteins coming about from mutations.  
Most of the mutations we know about lead to some form of disorder or disease, but that is because medical research gets way more funding than evolutionary biology research; there are likely just as many mutations that have yielded "beneficial" new proteins, but we haven't investigated them because we have an "if it ain't broke, don't fix it" mentality when it comes to research in the life sciences.  However, if you want an example of mutations giving rise to new proteins that are not deleterious, just look at our own immune systems, where genes for antibodies are constantly being mutated to give rise to antibodies that can recognize thousands of different antigens.  In fact, the ability of a single mutation to yield new and possibly beneficial proteins has led numerous chemical and bioengineers to artificially induce (relatively few) mutations and select for proteins (based on phenotypes, like the ability to degrade or metabolize a new substrate) that may have therapeutic or commercially beneficial properties.  Essentially, they are recreating the evolutionary process, at the level of individual proteins, in the lab; a toy simulation of this mutate-and-select loop is sketched below.  (Here's a review, and here's another article, for example, and another one; if you want even more examples you can go here, or here.)  So, obviously, it is not impossible for one or a few mutations to result in phenotypes at the protein level that can be selected for or against, and we have many examples of where this has happened in our own DNA.
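And for fun, here is that toy simulation of the mutate-and-select loop.  Everything in it is invented for illustration: the "protein", the target variant, and the fitness function are stand-ins for what, in a real directed-evolution experiment, would be a library of mutants screened against an actual phenotype (like activity on a new substrate).

import random

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TARGET = "MKTAYIAKQR"        # hypothetical variant with the desired activity

def fitness(seq):
    # Stand-in for a lab assay: count positions matching the desired variant.
    return sum(a == b for a, b in zip(seq, TARGET))

current = "MKTAYIAKQA"       # hypothetical starting protein
for generation in range(1, 2001):
    pos = random.randrange(len(current))
    mutant = current[:pos] + random.choice(AMINO_ACIDS) + current[pos + 1:]
    if fitness(mutant) >= fitness(current):   # keep neutral or beneficial changes
        current = mutant
    if current == TARGET:
        print(f"selected the desired variant after {generation} rounds")
        break

Random mutation plus a simple "keep it if it's at least as good" rule is enough to walk the sequence toward the selected phenotype, which is the whole point the letter writer is denying.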

The final argument is possibly my favorite because it illustrates so clearly what is wrong with ID (in a word: fatalism).  Also, it is basically the same old creationist argument of "the eye is too complex to have evolved..." EXCEPT, since the evolution of the eye has been so well described (or here), the ID-iots must have now moved on to the heart...  
"The Heart of the Matter: one thing that has always perplexed me from an evolutionary standpoint, Dr. Itzkovitz, has been the heart... Essentially what we have here is a pump... a machine... complete with chambers, muscle, blood vessels, valves, and an intricate arrangement of electrically conductive tissue that all work together, with incredible rhythm and harmony, to pump deoxygenated blood (including red blood cells containing hemoglobin) to the lungs (where it is oxygenated), receive it back again, and then pump it to the rest of the body, and then back again... My dilemma is this: Which evolved first, the chambers or the valves? the cardiac muscle or the neural circuitry which excites it?  The blood vessels or the blood itself? the red blood cells or the hemoglobin they carry?  the lungs or the heart itself?
Okay, before I get into it... here's a blurb about the evolution of the four-chambered heart (and, before that, the two-chambered heart, which evolved from a muscular tube like the one we find in some fish); here's an abstract of a review on the evolution of blood itself (sorry, but I couldn't find a freely accessible full article); and here's an article on the evolution of hemoglobin (and how it was co-opted for carrying around the oxygen used in metabolism).
I could stop there, and let you just read about it on your own, but this argument is just such a great opportunity to demonstrate the difference between science and non-science.
How did the heart come about?  How did blood come about?  And hemoglobin?  These are all excellent questions!  I am completely fascinated!  Where did blood cells come from?  How did they first start to work?  As a scientist, I would start figuring this out by making predictions and looking for evidence to either support or refute those predictions.  For example, I might look at other aerobic organisms and see what their circulatory systems look like, predicting that some of them have retained more "primitive" hearts and circulatory systems that could give me clues to what the predecessor of our own system looked like.  I might also predict that organisms capable of anaerobic as well as aerobic metabolism (that is, they can survive with or without oxygen) use something like hemoglobin for something other than aerobic metabolism, which might provide another clue.  I might look at the heart, note that it resembles modified smooth muscle (which, in the digestive system, is innervated to move food along the GI tract), and see if I could find an organism that moves oxygen along with its food, or perhaps uses smooth muscle and a similar peristaltic motion to perfuse itself with oxygen.  I could go on and on, generating testable predictions that, upon investigation, would tell us more about the world around us, the other organisms we share it with, and our own history: the history of how we evolved.  That, in turn, could reveal more about how our circulatory system works, and why it works the way it does, which could lead to our being able to regenerate parts of the heart after a heart attack (something some species of fish can do despite having more "primitive" hearts), or to figuring out how to make the blood cells of people with various forms of anemia more effective at carrying oxygen, and so on.  The point is, the scientific method contributes a greater knowledge and understanding of our natural world, and contributes to our advancement as a society and as a species; it allows us to do more and greater things with our knowledge.  What does the ID "method" do?  Well, you can see it all over this section of the letter: "do you see my quandary here?"  "please help me out here..."  this is a "real chicken-or-the-egg kind of thing"... "that has always perplexed me."  The reason it is so perplexing to this guy is that he already has a fixed idea in his head: he believes that the heart must have been formed whole, just popped into existence by "the designer," and because he is starting from this faulty conclusion and trying to make sense of it, he can't.  He throws his hands up in the air and says, "I give up.  Somebody way smarter than me must have put it together and slapped it into Adam's chest."  BUT, if this guy could work from the evidence up, and see where it takes him, he would find that there are actually very good explanations for how the circulatory system could have arisen gradually.  Do we have ALL of the pieces of evidence yet?  No.  But, by using the scientific method, we will continue to collect more and more evidence, and that will lead us to a better understanding of the circulatory system.  With ID, we will be left with nothing; there will be no need to continue investigating anything, because we can just say: "The designer did it.  And (he or) she is so smart, I can't possibly figure out what she (or he) did, so why bother.  I'm gonna go abuse prescription drugs and play video games."
Okay, maybe that last part won't happen, but, trust me, thinking you have all of the answers is a dangerous place to work from.  That's why science works from the question to find each new answer, and from each answer comes up with new questions.  Scientists don't start with the answers; they constantly have to seek them out... One of my favorite comedians, Dara O'Briain, says it best: "Science knows it doesn't have all the answers... otherwise, it'd stop."

Anyway, those were the main arguments.  The guy goes on to talk about how consensus science is rarely right, but, like all of these nutjobs, he points to ancient times when the consensus was a flat earth or a geocentric universe; he doesn't point to the fact that pretty much everyone accepts the theory of gravitation (gravity), or the germ theory of disease, or even the theory of relativity.  I mean, these theories must be equally flawed since they are the consensus, right?  He also recommends that both of Behe's books be made required reading material for our biology and biochemistry undergrads... pardon the expression, but it'll be a cold day in hell before that happens.  And he doesn't understand why we claim, on the website, that Behe's books are not science.  I'll try not to be too repetitive here, but science is a METHOD of investigation, a METHOD for understanding the natural world (hence: scientific method).  Making things SOUND or SEEM scientific by talking about the circulatory system, or about other biological processes that were discovered using the scientific method, is NOT science.  Just think about all of the herbal supplements, diet products, and other quackery out there that SEEMS scientific but doesn't work for anything except making money for the snake oil salesmen.  If it does not involve hypothesis testing, it is not science.

He ends the letter by insulting all of us who accept the evidence for evolution by natural selection as being lazy (HA!), cowardly (I'm only afraid when my dentist says "this may take a while"), arrogant (wait, who's claiming to have absolute truth here?), too ideologically committed (remember that time I was telling you what irony is?), or too dishonest to admit what he/she really suspects... WOW!  hmmm... I'll leave you to think on that one.