“Sharing Clinical Research Data: Workshop Summary (2013)”

National Academies Press (N.A.P.)
-“Changing the Culture of Research”
http://www.nap.edu/openbook.php?record_id=18267&page=57
-“Final Reflections on Sharing Clinical Research Data”
http://www.nap.edu/openbook.php?record_id=18267&page=73
*note: 10 percent of the US population is in some sort of clinical trial

At the same time, keep in mind the RCT culture, which places perhaps too much of a premium on clinical trials.
“Witty A: Report to the President”
David Healy-May 7, 2013
http://davidhealy.org/witty-a-report-to-the-president/
“While accepting Andrew Witty’s suggestion that he and other employees of GSK are just like us, few of us can imagine being party to behaviours that warrant a $3 Billion fine. But from this study it seems we are more like him than he is like what we might like to think of ourselves.

GSK’s proposals for transparency need to be read against this background – is the company really offering to do the equivalent of remove their drugs from competition? Or are GSK just a little bit smarter than the competition?

When faced with the growing Fascism in healthcare, we can retreat to the wilder shores of conspiracy theory and claim the problems are down to the Germans, the Jews, or a Socialist cabal. Or we can attribute the problems to some evil people somewhere in pharmaceutical companies who break laws.

But if we accept that the Board of GSK and other companies are populated with people just like you and me, who are perhaps even less likely to break the law than you or I, the $3 Billion fine for GSK notwithstanding, then the problem must stem from and the remedy lie in the system.”

This would most certainly include the stakeholders:
http://mentalillnesspolicy.org/

and the gamble of the RCT culture:
http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.html

Deception in Psychological Research
(wiki):
Psychological research often needs to deceive the subjects as to its actual purpose. The rationale for such deception is that humans are sensitive to how they appear to others (and to themselves) and this self-consciousness might interfere with or distort from how they actually behave outside of a research context (where they would not feel they were being scrutinized). For example, if a psychologist is interested in learning the conditions under which students cheat on tests, directly asking them, “how often do you cheat?,” might result in a high percent of “socially desirable” answers and the researcher would in any case be unable to verify the accuracy of these responses. In general, then, when it is unfeasible or naive to simply ask people directly why or how often they do what they do, researchers turn to the use of deception to distract their participants from the true behavior of interest. So, for example, in a study of cheating, the participants may be told that the study has to do with how intuitive they are. During the process they might be given the opportunity to look at (secretly, they think) another participant’s [presumably highly intuitively correct] answers before handing in their own. At the conclusion of this or any research involving deception, all participants must be told of the true nature of the study and why deception was necessary (this is called debriefing). Moreover, it is customary to offer to provide a summary of the results to all participants at the conclusion of the research.

Though commonly used and allowed by the ethical guidelines of the American Psychological Association, there has been debate about whether or not the use of deception should be permitted in psychological research experiments.

Those against deception object to the ethical and methodological issues involved in its use. Dresser (1981) notes that, ethically, researchers are only to use subjects in an experiment after the subject has given informed consent. However, because of its very nature, a researcher conducting a deception experiment cannot reveal its true purpose to the subject, thereby making any consent given by a subject misinformed (p. 3). Baumrind (1964), criticizing the use of deception in the Milgram (1963) obedience experiment, argues that deception experiments inappropriately take advantage of the implicit trust and obedience given by the subject when the subject volunteers to participate (p. 421).

From a practical perspective, there are also methodological objections to deception. Ortmann and Hertwig (1998) note that “deception can strongly affect the reputation of individual labs and the profession, thus contaminating the participant pool” (p. 806). If the subjects in the experiment are suspicious of the researcher, they are unlikely to behave as they normally would, and the researcher’s control of the experiment is then compromised (p. 807).

Those who do not object to the use of deception note that there is always a constant struggle in balancing “the need for conducting research that may solve social problems and the necessity for preserving the dignity and rights of the research participant” (Christensen, 1988, p. 670). They also note that, in some cases, using deception is the only way to obtain certain kinds of information, and that prohibiting all deception in research would “have the egregious consequence of preventing researchers from carrying out a wide range of important studies” (Kimmel, 1998, p. 805).

Additionally, findings suggest that deception is not harmful to subjects. Christensen’s (1988) review of the literature found “that research participants do not perceive that they are harmed and do not seem to mind being misled” (p. 668). Furthermore, those participating in experiments involving deception “reported having enjoyed the experience more and perceived more educational benefit” than those who participated in non-deceptive experiments (p. 668).

Lastly, it has also been suggested that an unpleasant treatment used in a deception study or the unpleasant implications of the outcome of a deception study may be the underlying reason that a study using deception is perceived as unethical in nature, rather than the actual deception itself (Broder, 1998, p. 806; Christensen, 1988, p. 671).”
https://en.wikipedia.org/wiki/Deception#In_psychological_research

Back to Dr. Healy:
“The bottom line is the average drug has at least 100 effects. Using clinical trials we have been hugely successful in hypnotizing doctors and patients to focus on one effect and to miss the other 99. This pharmagnosia is a major driver of Pharmageddon (See Marilyn’s Curse). If the climate change encroaching on healthcare is to be rolled back, we need to manage pharmagnosia.

We can still have a market solution but it needs to be a Comparative Safety market rather than a Comparative Efficacy market.”
“Witty A: Report to the President”
http://davidhealy.org/witty-a-report-to-the-president/
