Color of Information
We just finished working on a case that astounded not only our experts but also the experts on the other side.
The matter here is not specifically important, but it is generally important. In short, the exact case does not matter; it was valuable because it was a highly technical matter involving the output of many scientific studies. These studies had been cited in many other studies and were considered reliable information. Whether under the Frye or the Daubert standard, it did not matter: there were solid published studies. What made this more interesting is that there were solid published studies that came to opposing conclusions. Thus, when significant damages occurred as a result of following one side or the other, litigation was bound to ensue.
In general, the more a paper is cited, the more weight it is given in academic circles. A published study or paper that has been cited 100 times is considered more valuable than one that has been cited 30 times. It has nothing to do with why the paper is cited; it has simply been cited more times, and that adds more value. It is the gold nugget theory of scientific paper weighting: what matters is how many gold nuggets you have, not the combined weight of the gold nuggets.
As part of the battle, we (a team of professionals) began looking at every time each paper was cited, by whom, and for what reason. For example, perhaps the paper cited 100 times was cited that often because its conclusions were misinformed, and 80 of the hundred citations used it as an example of what bad research looks like. The paper cited 30 times might then be considered the better one; this is a possibility. We also looked at citations per year since publication, since a newer paper has not had as much time as an older paper to accumulate citations. We also looked at the names of the researchers and whose papers they were citing, as well as the names of the jurors of the different papers when the jurors' names were made available. Jurors are the professionals who opine on whether a journal should publish a paper.
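The citations-per-year normalization described above can be sketched in a few lines. The figures below are hypothetical, purely for illustration; the actual analysis in the case involved far more data and judgment.

```python
# Sketch: normalize raw citation counts by the years a paper has been
# available, so older papers do not win on age alone. Hypothetical numbers.
from datetime import date

def citations_per_year(total_citations, year_published, today=None):
    today = today or date.today()
    years = max(today.year - year_published, 1)  # avoid division by zero
    return total_citations / years

# An older paper with 100 citations can rank below a newer one with 30:
old = citations_per_year(100, 2000, today=date(2020, 1, 1))  # 5.0 per year
new = citations_per_year(30, 2015, today=date(2020, 1, 1))   # 6.0 per year
```

Under this measure, the newer paper's 6.0 citations per year outweighs the older paper's 5.0, even though the raw counts suggest the opposite ordering.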
What we found astounded all of us. In general, 19 professionals cross-cited each other, served as jurors for each other, and wrote about each other. 18 of the 19 scientists worked for one of three universities, and all had commercial contracts with private industry whose "commercial viability" depended upon the outcome of their work.
Papers, again, are rated on how many people cite them, much like a web search engine ranks web pages: the more times a web page is accessed, and the more it is cross-posted or linked to, the higher its ranking. This is more or less exactly how scientific papers are ranked.
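The search-engine analogy can be made concrete with a minimal PageRank-style score over a citation graph. This is a sketch of the general ranking idea, not the method actually used in the case; the papers and damping factor are illustrative assumptions.

```python
# Minimal PageRank-style scoring of a citation graph (a sketch).
# Nodes are papers; an edge A -> B means paper A cites paper B.

def pagerank(citations, damping=0.85, iterations=50):
    """citations: dict mapping each paper to the list of papers it cites."""
    papers = set(citations) | {p for cited in citations.values() for p in cited}
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in papers}
        for citing, cited_list in citations.items():
            if cited_list:
                share = damping * rank[citing] / len(cited_list)
                for cited in cited_list:
                    new_rank[cited] += share
        rank = new_rank
    return rank

# Papers A, B, C cite each other in a ring; outsider D cites A but is
# never cited back, so the ring members accumulate score while D does not.
scores = pagerank({"A": ["B"], "B": ["C"], "C": ["A"], "D": ["A"]})
```

The toy example illustrates the vulnerability the article describes: a closed group that cites itself concentrates ranking weight, while an uncited outsider stays near the floor no matter how rigorous its work is.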
In short, by cross-citing, judging, and working together, these 19 individuals were able to drown out other studies of greater scientific rigor.
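One simple way to surface this pattern is to flag reciprocal citation: pairs of authors who each cite the other. A minimal sketch, with hypothetical author names, might look like this.

```python
# Sketch: flag pairs of authors who cite each other (reciprocal citation),
# one signal of the cross-citing behavior described above. Names are
# hypothetical placeholders.
from itertools import combinations

def cross_citing_pairs(cites):
    """cites: dict mapping an author to the set of authors they cite."""
    return [(a, b) for a, b in combinations(sorted(cites), 2)
            if b in cites.get(a, set()) and a in cites.get(b, set())]

pairs = cross_citing_pairs({
    "Smith": {"Jones", "Lee"},
    "Jones": {"Smith"},
    "Lee": {"Park"},
    "Park": set(),
})
# Smith and Jones cite each other; Lee cites Park, but not reciprocally.
```

In practice such pairs would be combined with juror assignments and employer records, as the team did, before drawing any conclusion about coordinated behavior.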
The litigation was stayed by agreement of both parties so they could try to understand the weight and impact of what had been unveiled. Experts on our side and on their side were very surprised by what our research into the background of the papers showed. A cabal of researchers deliberately drowning out opposing scientific research is very disturbing.