
*Deep* Learning

Gunbir Baveja

March 22, 2023

7 min read

Okay.

Let's talk about learning.

I guess learning about Fully General Counterarguments wasn't the best thing for my communication course, where I have to 'counter' some claims.

Hide what you want to hide, so you can control what you want to control.

learning linearities flanked by hand-coded nonlinearities
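
In code, one toy reading of that line (a sketch of my own, with made-up layer sizes, not anything from the essay below): the learned parts of a small network are the linear maps, and the nonlinearities sandwiched between them are fixed by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # hand-coded nonlinearity: fixed, nothing here is learned
    return np.maximum(0.0, x)

# learned linearities: weights and biases (random stand-ins for trained values)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def mlp(x):
    # linear -> hand-coded nonlinear -> linear
    return W2 @ relu(W1 @ x + b1) + b2

print(mlp(np.ones(4)))
```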

When talking about “science”, I exclude non-empirical or purely theoretical subjects that do not depend on real-world data, such as mathematics and theoretical physics. “Bias” refers to the inclination to favor one perspective over others; its opposite, “neutrality”, is what the scientific method and peer review aim for (Castillo, 2013).

The scientific method is an approach to research that attempts to be as impartial as possible in the pursuit of understanding the world and attaining objective truth. With its methodical, idealistic approach to gathering and analyzing data, it has proven to be a cornerstone of human progress. Nevertheless, despite scientists' best attempts to be objective, it has become increasingly clear that the search for objective truth is considerably more difficult than previously believed. A variety of biases, both conscious and unconscious and largely beyond our control, inherently shape our perceptions and opinions. Scientists are not exempt, and these biases affect how they gather and analyze information. This essay will examine the claim that, despite our best efforts, science is ultimately influenced by researchers' experience and expectations.

First and foremost, the influence of expectations on human perception can distort scientific results. One illustrative instance is the research conducted by Arthur Worthington in the late 19th century on the behavior of fluid droplets. Initially, he relied on visual observation to track the fluid dynamics, producing drawings of radially symmetrical droplets. When he later used photography to collect data under similar conditions, however, he found irregularly shaped droplets, including some he had previously disregarded as outliers. This highlights how even objective, precisely monitored experiments can be unconsciously shaped by preconceptions. There are numerous other cases (Kuhn 1962, 111, 113–114, 120–121) where scientists only later realized the importance of information they had ignored until new ideas brought it into focus. Nevertheless, these forms of bias are usually corrected by other researchers.

Peer review, although held in high regard precisely because it asks reviewers to critique an article's methodology and flag any intrusion of bias, may itself inhibit scientific objectivity to a significant extent. In a controlled study, Blackburn and Hakel (2006) analyzed 7,383 sets of post-research review ratings and found that authoring reviewers gave lower ratings than non-authoring reviewers. Many other studies (Sandström, 2009; Resch et al., 2000; Ernst et al., 1992; Helmer et al., 2017), drawing on detailed reviews of the existing literature, document biases in peer review and show that it can be fallible. These findings suggest that legible yet unconventional papers, or papers written by non-influential scientists, are at a disadvantage in the peer review process, undermining the quality-control function of the scientific method.

A counterargument holds that the fact that values do sometimes enter into scientific reasoning does not by itself settle whether it would be better if they did not (Boyd et al., 2021). Philosophers of science such as Anderson (2004) and Intemann (2020) have likewise cast values in a positive light, showing that they play a legitimate role in scientific reasoning. On this view, the mere involvement of theory and values does not disqualify the scientific method; what matters is how presuppositions and beliefs are accounted for in research.

Despite these claims, there stands a robust argument that scientists respond to external incentives, which in turn influence experimentation and review. The earliest evidence for this came from Berlin et al. (1989), who examined a cohort of published reports of clinical trials and showed that smaller studies reported significantly greater mean treatment effects, in terms of disease-free survival and response rates. There is also evidence that acceptance of abstracts is subject to publication bias: Koren et al. (1989) followed 58 abstracts submitted to the Society for Pediatric Research between 1980 and 1989 that examined fetal outcome after in utero exposure to cocaine, and found that 57% of abstracts reporting adverse effects were accepted, while only 11% of those showing no adverse effect were. Lexchin et al. (2003) assessed profiles of influential and relatively non-influential scientists and their studies, and suggested that small, highly cited, and earlier studies yield inflated results (accounting for 27% of the variance in primary outcomes).
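
To make the publication-bias mechanism concrete, here is a minimal simulation sketch (the effect size and sample sizes are assumptions of mine, not numbers from the cited studies): if only statistically significant results get published, the average published effect overestimates the true one, and the inflation is worse for small studies, which is the pattern Berlin et al. describe.

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.2      # assumed true treatment effect, in standard-deviation units
n_sims = 10_000        # simulated studies per study size

for n in (20, 200):    # small vs. large study (participants per arm)
    se = np.sqrt(2 / n)                       # standard error of a mean difference
    observed = rng.normal(true_effect, se, n_sims)
    significant = observed / se > 1.96        # keep only "significant" positive results
    print(f"n per arm = {n:3d}  mean effect, all studies: {observed.mean():.2f}  "
          f"published only: {observed[significant].mean():.2f}")
```

The point of the toy is only that the selection step, not any dishonesty, is enough to inflate the published record.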

In the face of all these arguments, though, the scientific method continues to evolve as I write this essay, and it will keep evolving as its proponents improve it. Although an estimated 33–50% of scientific discoveries are stumbled upon rather than sought out, the scientific method remains the most robust tool we have in the face of human error and bias (Dunbar & Fugelsang, 2004).

References

  • Dunbar, K., & Fugelsang, J. A. (2004). Causal thinking in science: How scientists and students interpret the unexpected. Scientific and Technical Thinking, 57–79.

  • Castillo, M. (2013). The Scientific Method: A Need for Something Better? American Journal of Neuroradiology, 34(9), 1669–1671. https://doi.org/10.3174/ajnr.a3401

  • Bruner, J. S., & Postman, L. (1949). On the perception of incongruity: A paradigm. Journal of Personality, 18(2), 206–223. https://doi.org/10.1111/j.1467-6494.1949.tb01241.x

  • Kuhn, T. S. (1977). Objectivity, value judgment, and theory choice. In The Essential Tension: Selected Studies in Scientific Tradition and Change (pp. 320–339). University of Chicago Press.

  • Azzouni, J. (2004). Theory, Observation and Scientific Realism. The British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/55.3.371

  • Yap, A. (2016). Feminist Radical Empiricism, Values, and Evidence. Hypatia, 31(1), 58–73. https://doi.org/10.1111/hypa.12221

  • When peer review falters. (2011, January 6). The New York Times. https://www.nytimes.com/roomfordebate/2011/01/06/the-esp-study-when-science-goes-psychic/when-peer-review-falters

  • Theory and Observation in Science. (2021). In The Stanford Encyclopedia of Philosophy (Winter 2021 ed.). https://plato.stanford.edu/archives/win2021/entries/science-theory-observation/

  • Blackburn, J. L., & Hakel, M. D. (2006). An Examination of Sources of Peer-Review Bias. Psychological Science, 17(5), 378–382. https://doi.org/10.1111/j.1467-9280.2006.01715.x

  • Resch, K. I., Ernst, E., & Garrow, J. (2000). A randomized controlled study of reviewer bias against an unconventional therapy. Journal of the Royal Society of Medicine, 93(4), 164–167. https://doi.org/10.1177/014107680009300402

  • Sandström, U. (2009). Cognitive bias in peer review: A new approach. In Proceedings of the 12th International Conference of the International Society for Scientometrics and Informetrics (pp. 742–746). Rio de Janeiro, Brazil, July 14–17, 2009.

  • Helmer, M., Schottdorf, M., Neef, A., & Battaglia, D. (2017). Gender bias in scholarly peer review. ELife, 6. https://doi.org/10.7554/elife.21718

  • Anderson, E. (2004). Uses of Value Judgments in Science: A General Argument, with Lessons from a Case Study of Feminist Research on Divorce. Hypatia: A Journal of Feminist Philosophy, 19(1), 1–24. https://doi.org/10.1111/j.1527-2001.2004.tb01266.x

  • Intemann, K. (2020). Feminist Perspectives on Values in Science. Routledge EBooks, 201–215. https://doi.org/10.4324/9780429507731-19

  • Berlin, J. A., Begg, C. B., & Louis, T. A. (1989). An Assessment of Publication Bias Using a Sample of Published Clinical Trials. Journal of the American Statistical Association, 84(406), 381–392. https://doi.org/10.1080/01621459.1989.10478782

  • Koren, G., Madjunkova, S., & Maltepe, C. (2014). Bias against the null hypothesis: scaring pregnant women about drugs in pregnancy. Canadian Family Physician.

  • Lexchin, J., Bero, L., Djulbegovic, B., & Clark, O. (2003). Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ, 326(7400), 1167–1170. https://doi.org/10.1136/bmj.326.7400.1167

  • Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.

~ Prioritize yourself.