Representative Lamar Smith (R-TX) chairs the House Committee on Science, Space, and Technology, and he created quite a stir in April 2013 when he introduced a bill to change the way the National Science Foundation (NSF) reviews grant applications. The NSF is an independent federal agency with an annual budget of approximately $7 billion that funds about a quarter of all federally supported scientific research projects. Smith believes the NSF mismanages its money and wastes funds on frivolous research.
Smith’s proposal would require the NSF to certify that all research it funds is:
- “…in the interests of the United States to advance the national health, prosperity, or welfare, and to secure the national defense by promoting the progress of science;
- “… the finest quality, is groundbreaking, and answers questions or solves problems that are of utmost importance to society at large; and
- “…not duplicative of other research projects being funded by the Foundation or other Federal science agencies.”
What Smith’s argument really boils down to is that he feels all research should be applied research rather than basic research. Applied research is most often associated with the private sector, where money is spent on programs that develop technology and other applications based on advanced scientific studies. Academic research typically takes a much broader approach, with researchers simply extending the base of current scientific knowledge.
Having worked in both applied and academic research roles, I can say that I prefer applied research projects, as I am constantly looking to advance current technologies and techniques to obtain the best performance. That does not mean that beneficial scientific results come only from applied research programs; many of the best applied research initiatives have grown out of basic academic research. I also believe that an academic research program can be set up and operated like an applied research program, even though it is not in the private sector. More and more universities are looking to capitalize on applied research through technology licensing and startup formation.
One of the areas Smith attacked was the NSF peer review process. There are two distinct peer review systems: 1) the dual peer review system for NSF grant applications, and 2) the scientific peer review system for journal publications. Pitching biomechanics studies are typically funded through organizations such as MLB rather than the NSF, but they are still subject to grant applications and to subsequent scientific peer review when papers are submitted for journal publication.
As a research scientist of over 20 years, I am very familiar with the peer review system. It is obviously a very important requirement for scientific advancement, maintaining standards of quality and lending credibility to the process. However, the peer review process does not validate data analysis; rather, it is simply a review of the submitted data. Jalees Rehman summarizes these thoughts: “Peer reviewers do not perform any experiments to check the accuracy or replicability of the data contained in the submitted manuscript. The assessment of the validity of the results does not occur during the peer review process, but months or years later when other scientists attempt to replicate the published paper. Peer-reviewed research is understandably more rigorous than research which has not undergone any review process, but the review process is quite limited in its scope and prone to errors due to the subjective priorities and biases of editors and reviewers. Validation of the research occurs when independent scientists are able to replicate the published findings. Replication of scientific results is the gulf which separates peer review from peer validation.”
The following figure summarizes the peer review process rather succinctly.
Notice the gray clouds in the above figure detailing the peer review process. The process says nothing about the accuracy of the data or whether alternative methodologies might provide better outputs. As described above, that is left to the scientific community, which must self-regulate to validate the data analysis or provide better alternatives.