

The Waning of Scientific Innovation

According to a recent report published in the scientific journal Nature,1 innovations in science are waning. The authors analyzed 45 million academic articles and 3.9 million patents published since World War II, and their conclusion is staggering: innovation, they claim, has dropped by at least 90 percent!

Nothing New Under the Sun

If innovation is declining, how do we make sense of world-changing recent inventions such as artificial intelligence or robotic surgical arms? And what exactly is the difference between scientific innovation, invention, and improvisation? In the Nature article, “innovation” refers to the most upstream discoveries that eventually lead to new products that would benefit people and other inhabitants of the planet. In effect, the article asserts that the source of new ideas is drying up.

Camera obscura (1850)
(Science Museum Group Collection)

The American research methodologist Lee Sechrest once put forth the provocative assertion that our current science and technology are mere updates of earlier original innovations. The airplane, automobile, telephone, radio, motion picture, and still camera were all present a century ago. They are still very much with us, simply updated with electronics. Indeed, Professor Sechrest’s own profession of psychology began by appropriating vocabulary and constructs from centuries-old physics, metaphysics, and philosophy. “Extravert” was mentioned by physicist Robert Boyle in 1657, while “introvert” and “introversion” date back to 1654, when theologian Thomas Gataker described “introversion” as thoughts turned inward by fasting and praying.2 Psychologists borrowed these constructs and more, and then set about measuring them.

Psychology had to start someplace, and borrowing from metaphysical antecedents was efficient. The next step, for psychology and every other science, is to move beyond what we have already seen to what we can see unrestricted by the baggage of the past. This is the key to innovation: seeing things with new eyes.

If we cannot extricate ourselves from learned and habitual ways of perceiving, then Newton’s “standing on the shoulders of giants” takes on a dark side. New scientists may be overly dependent on their declarative learning experience. Too often, change only comes when a new paradigm wipes out the certainty of a previous one. Indeed, dispensing with old ideas more quickly could make room for more innovation. In his book, The Structure of Scientific Revolutions, Thomas Kuhn tried to rev up the paradigm boom-and-bust cycle in science.3 He admired the way psychologists regularly question their own research methods4 and thought the hard sciences could benefit from a similar and frequent appraisal. Sometimes an entire generation must die off—literally—before a new, more efficient paradigm can win the field. Kuhn thought this was a waste of time, and that science should be more adaptable and fluid.

A true skeptic always aims to dig deeper, even when discussing work published by a very respectable source. The Nature paper uses the impact of publications and patents as a measure of innovation. However, that is not quite the whole story: some studies suggest that patents actually suppress innovation!5, 6

In the 1890s, Thomas Edison was the king of innovation and patents. Among his 1,093 U.S. patents were the electric light, the phonograph, direct-current power distribution, and motion pictures. The Nature authors assume innovators publish results and file patents. Yet today, pharmaceutical and technology companies regularly do not publish all of their research findings—presumably for fear that this would tip off their competitors.

Thomas Edison & William Dickson’s Kinetograph (1890)

Thomas Edison & William Dickson’s Kinetograph (1890), often considered to be the first motion-picture camera

Austrian economist Joseph Schumpeter (1883–1950) made the case that only large companies could afford research facilities for innovation and argued that only PhD-level scientists might innovate. But is that really the case? According to the Nobel laureate Edmund Phelps, no. He notes that the steam engine was invented 112 years before the laws of thermodynamics were written. Similarly, the Wright brothers flew a heavier-than-air vehicle without possessing degrees in aeronautical engineering. In contrast, their chief rival, Samuel Pierpont Langley, a professor laden with scientific honors, had ideas that literally flopped and died in the Potomac River in the form of an overpowered and under-designed kite.7 Phelps also points out that America in the 19th century was at least 100 years behind Europe in all of the sciences. Yet from the 1820s onward, American innovation has led the world and never fallen behind. The more scientifically educated nations of France, Belgium, Italy, and Britain have never managed to catch up.8 This leads us to consider the relationship among universities, research, and innovation.

The University

Universities have not always been structured the way they are today. It was only after World War II that the U.S. federal government started to distribute significant research grants to universities. In this way, Washington could outsource its research ambitions. Note that the distribution of grants coincides with the drop in scientific innovation. Could universities then be interfering with science?

Over those post-war years, universities gave scientists the space and time to perform research. In return, professors surrendered “indirect costs” of 50 percent or more of their grant monies to the university. In 1991, a scandal over indirect cost rates above 70 percent erupted at Stanford University. The money had bought then-president Donald Kennedy a grand piano and helped pay for a yacht. Kennedy was forced to resign, and Stanford had to return millions in grants to the U.S. government.9

The Benz Patent Motor Car No. 1 (1885), widely regarded as the first gas-powered automobile. (© Mercedes-Benz AG)

Today’s university administrators pressure faculty to teach more and earn higher student ratings, and they increasingly influence hiring, promotion, and tenure decisions. Research is a lower priority. As a result, a movement has been forming for science to leave the university.10 University scientists might form institutes organized in the image of law firms, complete with junior and senior partners. The research institute would keep the indirect costs. Scientists would have more control over their careers, and their research grants would not be spent on supporting academic bureaucracy or maintaining 150-year-old buildings. Whether that’s a feasible idea remains to be seen.

Specialization

According to Professor A.J. Figueredo at the University of Arizona, the Flynn effect (the putative rise in human general intelligence over the decades11) may be an illusion created by specialization. He posits that there has actually been a decrease in general intelligence, masked by a large rise in specialized intelligence. Maybe more specialization only makes us look smarter?

Indeed, perhaps one of the factors limiting innovation is the trend toward over-specialization in our education, job training, and careers. While specialization lets us build valuable expertise, over-specialization creates blind spots: it limits the ability to adapt to new situations or solve problems that span multiple disciplines. Thomas Kuhn’s critique of physics highlights how assumptions go unquestioned when one is deeply entrenched in a narrow discipline. Over-specialization breeds a lack of appreciation for the broader context and keeps innovation from crossing knowledge silos.

Generalists, equipped with broad knowledge across many fields, excel in cross-disciplinary thinking, driving innovation by applying ideas from one field to another. Their strength lies in their ability to connect concepts across disparate domains, spurring the genesis of new ideas. In other words, the role of the generalist is critical for innovation.12 In scientific innovation, generalists question established parameters, challenge the status quo, bring fresh perspectives, and catalyze breakthroughs.13

Productivity

Frederick Winslow Taylor developed his system of scientific management at Bethlehem Steel between 1899 and 1901.14 His tool of choice was the stopwatch. Taylor bragged that his work would revolutionize the workplace and lead to unparalleled increases in productivity and efficiency. Above all, “Taylorism” promoted uniformity and compliance. He cared little about innovation or about learning from any of the “lesser minds” working out on the factory floor. Taylor’s definition of efficiency was timing workers as they carried 90-pound pig iron ingots at the steel mill.15 While Taylor was persistent and persuasive enough for Harvard to establish the first graduate school of business in 1908, Bethlehem Steel fired him for contributing little of value.16

Later in the 20th century, Six Sigma was introduced as a set of statistical tools to reduce the occurrence of manufacturing defects. Adhering to a strict methodology of Define, Measure, Analyze, Improve, and Control (DMAIC), Six Sigma offered a structured approach to problem solving.

In the 1980s, the Ford Motor Company noticed that car buyers preferred Ford Taurus models with transmissions made in Japan. This posed a big challenge, as Ford had a 40-acre transmission plant, all under one roof, in Batavia, Ohio. The company hired the statistician W. Edwards Deming to teach them what he had been teaching Japanese manufacturers about quality control since 1950.17 All 1,400 employees at the plant were required to learn statistics (I happened to be one of their statistics instructors). Within a few years, American transmissions were as good as the Japanese ones. But how did this happen?

Donald Manson working as an employee of the Marconi Company, England 1906. (Library and Archives Canada, PA-122236)

Until Deming’s work at Ford, productivity was measured as a count of pieces produced, accepting a certain rejection rate. After Deming, the accuracy of automotive parts machining improved dramatically: fewer parts were rejected, more could be used, and the resulting transmissions ran more smoothly. Productivity rose not because more pieces were machined, but because more of them were usable.
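
To make that arithmetic concrete, here is a minimal Python sketch; the piece counts and rejection rates are hypothetical and not figures from the Batavia plant:

```python
# A minimal sketch with hypothetical numbers showing why a lower rejection rate
# raises effective productivity even when the raw piece count stays the same.

pieces_machined = 1000        # assumed daily output of parts
reject_rate_before = 0.10     # assumed rejection rate before process improvement
reject_rate_after = 0.02      # assumed rejection rate after

usable_before = pieces_machined * (1 - reject_rate_before)  # 900 usable parts
usable_after = pieces_machined * (1 - reject_rate_after)    # 980 usable parts

gain = (usable_after - usable_before) / usable_before
print(f"Usable parts rise from {usable_before:.0f} to {usable_after:.0f} "
      f"({gain:.1%} more), with no additional pieces machined.")
```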

As valuable as tools such as Six Sigma are, they do not capture the full spectrum of variables, or the measurement of those variables, in our current world of geopolitical complexity. Recent research shows that while innovation improves productivity, the reverse is not true.18 Some of our intricate and interdependent global systems resist basic quantification altogether. Creativity and innovation, being non-linear and often spontaneous processes, tend to resist conventional measurement.

SOLUTIONS

A Practical Tool

When—or even before—research slows toward a dead end, an economic tool called the Expected Value of Perfect Information (EVPI) can quantify the benefits of switching research tracks. The EVPI is calculated by comparing the expected outcomes, both positive and negative, of continuing down the current path with those of shifting to a new area of investigation. It estimates how much eliminating the remaining uncertainty would be worth, which can then be weighed against the cost of further research when deciding whether or not to pursue it.19

As an illustration, when a current study is indeterminate, a larger study with more statistical power is often recommended. Comparing the potential benefit of further reducing the uncertainty against the cost of doing so makes the decision easier. Sometimes, given the cost, a larger study is simply not worth it (though this may change in the future). A scientist might estimate thresholds at which the study could resume: “Until the cost of the blood test drops from $3,000 to below $160, we will halt further exploration.” Of course, this risks a competitor with deep pockets snatching the idea.
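
To make the calculation concrete, here is a minimal Python sketch of EVPI under assumed inputs; the two actions, the state probabilities, and the payoffs are hypothetical and not drawn from any real study:

```python
# A minimal EVPI sketch with hypothetical inputs: the two actions, the state
# probabilities, and the payoffs below are assumptions for illustration only.

# Prior beliefs about the uncertain state of the world.
states = {"current_track_pans_out": 0.3, "current_track_dead_end": 0.7}

# Assumed payoffs (say, in millions of dollars) for each action under each state.
payoffs = {
    "continue": {"current_track_pans_out": 10.0, "current_track_dead_end": -2.0},
    "switch":   {"current_track_pans_out": 1.0,  "current_track_dead_end": 4.0},
}

def expected_value(action: str) -> float:
    """Expected payoff of an action under current (imperfect) beliefs."""
    return sum(prob * payoffs[action][state] for state, prob in states.items())

# Best we can do if we must act now, under uncertainty.
ev_without_info = max(expected_value(action) for action in payoffs)

# With perfect information we would learn the state first, then choose the best action.
ev_with_perfect_info = sum(
    prob * max(payoffs[action][state] for action in payoffs)
    for state, prob in states.items()
)

# EVPI: an upper bound on what resolving the uncertainty could be worth.
evpi = ev_with_perfect_info - ev_without_info
print(f"Expected value acting now:         {ev_without_info:.2f}")
print(f"Expected value with perfect info:  {ev_with_perfect_info:.2f}")
print(f"EVPI (ceiling on research value):  {evpi:.2f}")
```

If a proposed follow-up study would cost more than the EVPI, it is not worth running at today’s prices, which is exactly the kind of threshold reasoning in the blood-test example above.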

The approach has several implications for boosting scientific innovation:

  1. Optimize resource allocation at the planning stage. By quantifying the value of new information, researchers and funding bodies can make more informed decisions about where to direct their resources. This helps ensure that valuable time, funding, and expertise are not wasted on unproductive lines of inquiry.
  2. Improve informed risk-taking. The use of EVPI promotes a culture of informed risk-taking in scientific research. Explicitly quantifying the risks and potential benefits of pursuing a new research track encourages researchers to veer from well-trodden paths and venture into unexplored territory.
  3. Exercise collaborative decision-making. In larger research organizations and institutions, the use of EVPI facilitates collaborative decision-making. Different stakeholders can bring their perspectives to the table, and discuss and decide on how much solving the problem at hand is worth to them today. Otherwise, it can be set aside until it is worth more to solve it, or technology has progressed such that it is cheaper to take it on again.
  4. Accelerate innovation by anticipating dead ends. By avoiding dead ends sooner and redirecting resources to more promising areas, the pace of scientific innovation can be increased. Combined with the method of multiple working hypotheses and Bayesian updating (a revision in probabilistic beliefs based on gaining new knowledge; see the sketch after this list), scientists can build upon successes and learn from failures more quickly, accelerating the overall progress of discovery.
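
Here is a minimal Python sketch of the Bayesian updating mentioned in item 4; the prior, the likelihoods, and the “pilot experiment” are hypothetical values chosen purely for illustration:

```python
# A minimal sketch of Bayesian updating; the prior, likelihoods, and "pilot
# experiment" below are hypothetical values chosen purely for illustration.

prior = 0.30                   # initial belief that the new track will pan out (H)
p_success_given_h = 0.80       # chance a pilot experiment succeeds if H is true
p_success_given_not_h = 0.20   # chance it succeeds anyway if H is false

def update(belief: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)."""
    evidence = likelihood_h * belief + likelihood_not_h * (1 - belief)
    return likelihood_h * belief / evidence

# Each successful pilot result strengthens the belief in H.
after_one = update(prior, p_success_given_h, p_success_given_not_h)
after_two = update(after_one, p_success_given_h, p_success_given_not_h)

print(f"Belief in H initially:             {prior:.2f}")      # 0.30
print(f"Belief after one positive result:  {after_one:.2f}")  # ~0.63
print(f"Belief after two positive results: {after_two:.2f}")  # ~0.87
```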
Portrait of a 1909 Wright Military Flyer in the Early Flight gallery (Smithsonian National Air and Space Museum, Washington, DC)

Make Mistakes

Joseph Schumpeter, mentioned earlier in this article, viewed large firms as important conduits for economic change and innovation, owing to their resource availability, talent pool, and potential for economies of scale. Bell Labs is a good example: the solar cell, the transistor, and components for calculators and computers all originated there. The monopolistic Bell Telephone Company made Bell Labs possible and inspired generations of scientists.

But while large organizations indeed present opportunities for innovation, they are just as likely to have dense bureaucracies and multiple layers of middle management that demand loyalty and conformity while stifling creativity. The challenge lies in harnessing these organizations’ capacity and resources while building a culture that encourages creativity and innovation.

Indeed, many organizations today have a low to zero tolerance for exploration and making mistakes. Exceptions include companies such as Google and 3M that allow employees a certain amount of “free time” to pursue their creative ideas, indicating a deliberate attempt to promote innovation within a structured environment.20 To innovate, companies must foster a growth mindset that values learning over perfection, recognizing failure as a stepping stone to innovation.

• • • • • •

Scientific innovation does not exist in a vacuum. It is the product of the interplay among various disciplines, concepts, and perspectives. A balanced approach to foster an environment ripe for innovation values both specialization and generalization. Specialists bring depth and precision, while generalists bring breadth and creativity.

As Edmund Phelps said, America has “pervasive indigenous innovation” that creates economic value without relying on formal education or scientific discovery. Think of the Wright Brothers. Phelps calls this the economic dynamism with which America led the world from 1820 through the 1960s. We know that America is doing something right because its adversaries go to great lengths to steal American ideas.21, 22 With the exception of defense intelligence, inventions dictated by despots are not innovative.

With regard to statistics, null hypothesis significance testing (NHST) and p-values are still reverently considered the sine qua non of science.23, 24 It may take a generation or two for NHST to become just one tool among many (rather than essentially the sole tool) in the kit for analyzing research outcomes. By bridging the divides between productivity and creativity, and between specialization and generalization, we may create fertile ground for scientific innovation. By embracing this complexity, we will be better equipped to face the challenges of the future and propel scientific progress.

To navigate the increasing uncertainty of the future, we must carefully examine accepted paradigms and dogmas of scientific research. Skeptics should encourage questioning even the very methods of science and measurement, and so foster diverse, critical, and cross-functional thinking. END

About the Author

J. Michael Menke is a research methodologist, biostatistician, and scientific measurement expert on the faculty at the University of Cincinnati Winkle College of Pharmacy and the University of Arizona School of Mind, Brain, and Behavior. He may be found on X at @MichaelMenkePhD.

References
  1. https://bit.ly/43tOAtT
  2. https://bit.ly/3Vt7MpB
  3. Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.
  4. Mladenovic, B. (2022). The Last Writings of Thomas S. Kuhn: Incommensurability in Science. University of Chicago Press.
  5. https://bit.ly/3vnL47J
  6. https://bit.ly/43wYMBG
  7. McCullough, D. (2015). The Wright Brothers. Simon and Schuster.
  8. Phelps, E. (2013). Mass Flourishing: How Grassroots Innovation Created Jobs, Challenge, and Change. Princeton University Press.
  9. Pool, R., & Aldhous, P. (1991). Stanford Counts Cost of Overhead Scandal. Nature. 352(6335), 459.
  10. https://bit.ly/43OjQE9
  11. https://bit.ly/43A08eZ
  12. Epstein, D. (2019). Range: Why Generalists Triumph in a Specialized World. Riverhead Books.
  13. Ibid.
  14. Stewart, M. (2006). The Management Myth: Why the Experts Keep Getting it Wrong. W.W. Norton and Company.
  15. Ibid.
  16. Ibid.
  17. Deming, W.E. (2018). The New Economics for Industry, Government, Education. MIT Press.
  18. https://bit.ly/4atEWJI
  19. Pratt, J.W., Raiffa, H., & Schlaifer, R. (1995). Introduction to Statistical Decision Theory. MIT Press.
  20. https://bit.ly/3xaungr
  21. https://bit.ly/4a7VR50
  22. https://bit.ly/3TO9UXJ
  23. Kline, R.B. (2004). Beyond Significance Testing. American Psychological Association.
  24. Ziliak, S.T., & McCloskey, D.N. (2008). The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. University of Michigan Press.

This article was published on December 27, 2024.

 