Science funding is broken. Thinkable wants to help fix it.


Thinkable is a promising new crowdfunding connection between researchers and sponsors (including the public at large). Founded by oceanographer and chief scientist Ben McNeil, “the idea for Thinkable comes out of Ben’s frustration over the lack of funding for basic research and a passion for blue-sky thinking.” Ben’s recent Ars Technica essay is a good introduction to why he believes science funding is broken, paired with the solution proposed by Thinkable: Is there a creativity deficit in science? If so, the current funding system shares much of the blame.

I won’t try to outline how the Thinkable platform and ecosystem work — the Thinkable website is very well-designed, so you’ll learn more about the venture by just jumping in — and be sure to sign up in your role as a sponsor or a researcher. I decided that the best way to evaluate Thinkable is to participate: I’ve subscribed as a sponsor of Martin Rees, whose current project funding is passing the 50% level: How can we stop blood vessels becoming damaged and sticky during inflammation?

Here are some of the reasons I’m excited about Thinkable:

  1. Taking risks is absolutely fundamental to real progress in science and technology. The existing institutional funding channels are highly risk averse — “crazy ideas need not apply here”.
  2. The path to breakthroughs is cobbled with mistakes. Mistakes are where most of the learning happens.
  3. Those characteristics are familiar to entrepreneurs who are successful innovators. The venture capitalists who consistently make superior returns know this very well. That’s why Silicon Valley slang is peppered with phrases like “Fail fast” and “Pivot”. “Let’s invest through the pivot” has probably been spoken more than once by a VC looking at superb founders (translation: “these guys are so good we want to work with them, even though their idea is probably going to fail”).
  4. Thinkable looks to be administratively very lightweight — so that funding goes to support research, not overhead. I understand that 87% of sponsor funding is delivered to the researcher: 10% goes to support the Thinkable platform and about 3% to payment processing fees (Visa etc.). Once Thinkable can rely on cryptocurrency payment processing, that 3% should fall to near zero. A rough sketch of the arithmetic follows this list.
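
Here is that sketch. It assumes the only deductions are the roughly 10% platform fee and roughly 3% payment processing mentioned above; the exact fee schedule is, of course, Thinkable's to confirm.

```python
# Minimal sketch of the sponsor-funding split described above.
# Assumes the only deductions are the ~10% platform fee and ~3% card
# processing; the exact fee schedule is Thinkable's to confirm.

PLATFORM_FEE = 0.10     # ~10% to support the Thinkable platform
PROCESSING_FEE = 0.03   # ~3% payment processing (Visa etc.)

def delivered_to_researcher(pledge, processing_fee=PROCESSING_FEE):
    """Amount of a sponsor pledge that reaches the researcher."""
    return pledge * (1.0 - PLATFORM_FEE - processing_fee)

print(delivered_to_researcher(100.00))                    # 87.0 with card fees
print(delivered_to_researcher(100.00, processing_fee=0))  # 90.0 if crypto cuts processing to ~0
```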

In my next post on Thinkable I hope to be able to explain who is funding the venture. Please help spread awareness of the Thinkable platform. If this takes off in a big way we could be helping to Change the World.

Chemical-Free Products: The Complete List

Derek Lowe

Here’s a comprehensive review of chemical-free consumer products, courtesy of Nature Chemistry. I’m flattered to have been listed as a potential referee for this manuscript, which truly does provide the most complete list possible of chemical-free cleaners, cosmetics, and every other class of commercially available product. Along similar lines, I can also recommend (…snip…)

Source: research chemist Derek Lowe, a remarkable insider source on preclinical drug discovery.

“Fallacy is instantaneous but truth works at the speed of science”

[Image: fukushimaplume.png]

Do you recognize the thumbnail at left as the Fukushima radiation plume spreading all over the Pacific? If that’s what you think the image is, you definitely will want to read on. If you recognize the thumbnail as the NOAA tsunami wave height model published the day after the Tohoku earthquake — then I hope you find some useful resources here.

I wish I had written No, but in all seriousness…. But I’m very happy that Alistair Dove did write this essay on critical thinking.  This is one of those pieces that we are so happy to find and share! Alistair sees a cross section of flawed reasoning in the comments that appear on the group blog Deep Sea News. E.g.,

… examples of the sort of reasoning that we have seen in comments, emails and tweets about the above examples:

  • Starfish wasting disease. Starfish are melting. Radiation leaked into the ocean at Fukushima. Therefore Fukushima caused the starfish melting.
  • Hurricane/Superstorm Sandy. Hurricane Sandy happened. Then dolphins began dying on the Atlantic coast. Therefore Sandy caused the Atlantic dolphin UME.
  • The “great Pacific garbage patch”. There’s a giant patch of garbage out there. If we could just sort of scoop it up, that would be good. Someone should invent something to do that.
  • The Long Island Sound lobster fishery. “They” sprayed insecticides in the tri-state area to control mosquito populations. Around the same time, lobsters died. Therefore insecticide spraying killed lobsters.

I see the “Backfire Effect” every day: 

A related problem is that in the time between when people first propose a fallacious cause, and when the true cause is revealed through reason and research, the fallacious one can become ingrained like an Alabama tick.  Once people get an idea in their head, even if it’s wrong, getting them to let go of it can be bloody hard.  Indeed, there’s a term for this; it’s called “the Backfire Effect”: when confronting someone with data contrary to their position in an argument, counter-intuitively results in their digging their heels in even more.  In this phenomenon, the media has to accept a sizable chunk of responsibility because, as the lobster example shows, the deadline-driven world of media agencies is more aligned with the rapid pace of the logical fallacy than with the slow and deliberate pace of scientific research.

Alistair closes with a checklist that we should share where it may do some good:

…so it can’t hurt for all of us to think consciously about our thinking, me included. To that end, I offer the following, non-comprehensive list of things to consider before you hit “Reply” on that cleverly crafted response. If you have additional suggestions I invite you to add them in the comments.

  • Am I seeing a pattern that could just be a statistical rarity, and leaping to a conclusion?
  • Am I connecting two events causally, because they occurred close together in space or time?
  • Am I inferring a cause in the absence of evidence for any other explanation?
  • Am I thinking inductively “It must have been such and such…”
  • Am I framing the issue as a false dichotomy (debating only two possible causes, when there may be many others). In other words, am I framing the issue as an argument with two sides, rather than a lively discussion about complex issues?
  • Am I attacking my “opponent” and/or his/her credentials, rather than his/her argument?
  • Am I arguing something simply because other/many people believe it to be true?
  • Am I ignoring data because I don’t want to lose face by conceding that I may be wrong?
  • Am I cherry picking data that support my position (a cognitive bias)

So, I hope that’s enough to motivate you to read Alistair’s No, but in all seriousness… You will be happy you gave it your attention and reflection.

Three climate scientists examine the recent slowdown (or ‘pause’) and online science communication

The recent slowdown (or ‘pause’) in global surface temperature rise is a hot topic for climate scientists and the wider public. We discuss how climate scientists have tried to communicate the pause and suggest that ‘many-to-many’ communication offers a key opportunity to directly engage with the public.

I recommend “Pause for thought” in Nature Climate Change. This very short essay by Ed Hawkins, Tamsin Edwards and Doug McNeall is ungated, after free registration. You can get a preview of the technical overview by studying the two following charts carefully. You’ll need to pay attention to the chart key underneath – there is a lot of information compressed into the two panels.

 

Observed global mean surface air temperatures (HadCRUT4, solid black line) and recent 1998–2012 trend (dashed black line), compared with ten simulations of the CSIRO Mk3.6 global climate model, which all use the RCP6.0 forcing pathway (grey lines). The grey shading represents the 16–84% ensemble spread (quantiles smoothed with a 7-year running mean for clarity); the ensemble mean trend is around 0.20 °C per decade. Two different realizations are highlighted (blue), and linear trends for specific interesting periods are shown (red, green, purple lines). a, The highlighted realization shows a strong warming in the 1998–2012 period, but a 15-year period of no warming around the 2030s. b, The highlighted realization is more similar to the observations for 1998–2012, but undergoes a more rapid warming around the 2020s. Note also that this realization appears outside the ensemble spread for 9 out of 10 consecutive years from 2003–2012.

The charts and discussion illustrate a central truth of climate science – the results are often only understood in a framework of statistics. The pretty, clean projected temperature curves that we see in the media are heavily smoothed over many runs of multiple models. That presentation conceals the natural variability that is part of the challenge of understanding the system and then testing hypotheses against observations. It is similar to the agonizing process at the Large Hadron Collider (LHC) as the teams tried to accumulate enough data to tease out a sufficiently confident identification of an anomaly corresponding to the Higgs.
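
If you want a hands-on feel for why a single 15-year trend can diverge from the long-term trend, here is a small illustrative sketch. It is not the authors' code; it uses synthetic annual "temperatures" with assumed warming and noise parameters, applies the same 7-year running mean mentioned in the caption, and fits a linear trend to the 1998–2012 window.

```python
# Illustrative only: synthetic annual "temperatures" with an assumed trend
# plus year-to-year noise, to show how a 7-year running mean and a short
# trend window behave. Not the CSIRO Mk3.6 model or HadCRUT4 data.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1980, 2051)
true_trend = 0.02                      # assumed 0.2 °C per decade
temps = true_trend * (years - years[0]) + rng.normal(0.0, 0.12, size=years.size)

# 7-year centred running mean (the smoothing window used in the figure caption)
window = 7
running_mean = np.convolve(temps, np.ones(window) / window, mode="valid")

# Linear trend over a 15-year slice, e.g. 1998-2012
mask = (years >= 1998) & (years <= 2012)
slope, intercept = np.polyfit(years[mask], temps[mask], 1)

print(f"15-year trend 1998-2012:     {slope * 10:.2f} °C per decade")
print(f"Long-term trend (assumed):   {true_trend * 10:.2f} °C per decade")
```

A single noisy realization will often show a 15-year trend well above or below the imposed long-term rate, which is exactly the point the two highlighted model realizations make.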

If you have a specific question about the authors’ presentation, you can ask the scientists directly on Twitter. It is uncommon for authors to reveal their Twitter handles in a paper, so please don’t make them regret the open door!

I recommend two other articles in this Nature Climate Change series:

1. Heat hide and seek [PDF]: Natural variability can explain fluctuations in surface temperatures, but can it account for the current slowdown in warming? The authors offer an excellent summary of the more promising current research, particularly the variability in heat distribution driven by modes such as:

  • El Niño/Southern Oscillation
  • Pacific Decadal Oscillation
  • Atlantic Multi-decadal Oscillation

2. Media discourse on the climate slowdown, where I learned, among other things, that the biggest recent media spike seems to be in Oceania – where we are presently cruising. Australia has been suffering from a severe drought, which no doubt generates increased interest in climate.

What do we know about the Tohoku tsunami debris field?


We have a direct personal interest in evaluating the 2014 risk of debris collisions in this area of the North Pacific. We will be sailing from NZ to Alaska again via Tahiti and Hawaii. There is always ocean debris – plastic flotsam, plus dangerous debris ranging from fishing nets and fishing gear to shipping containers. And we have first-hand experience with the too-common incidence of logs and trees floating near shore and in the channels of the Pacific Northwest.

[Image: Kalama Beach log]

The reason we are concerned about the open-ocean Hawaii–Alaska tsunami debris is the possibility of a high incidence of semi-submerged heavy objects that can put holes in the boat. Even an alert watch-keeper will not see these dangers ahead when sailing fast at night. Collision with logs, trees, fuel drums, or docks at 12–15 knots — that’s an event we would prefer to avoid. The IPRC sightings describe the Kalama Beach log at left as “Large log, length app. 20′, diameter app. 3′.”

It’s challenging to assess whether dangerous-collision risks are materially greater post-tsunami. What is missing is the perspective: “how different is it today from background?” I’ve analyzed that question regarding the radiation/contamination risks. E.g., the media hyperventilating about Fukushima impacts on the Pacific Ocean. I think I understand the Pacific Ocean contamination risks fairly well. The contamination analysis is easier because we have a sound series of observations combined with a good theory.

We are accustomed to having access to satellite-based data on ocean conditions. Near real-time temperature, currents, and sea state are available for most of the planet. Unfortunately the satellite imagery doesn’t tell us about the collision hazards that could be waiting for our yacht (two hulls, twice the collision opportunity, right?). The reason is that the Tohoku debris is so dispersed and physically small that it has not been detectable by satellite imagery since shortly after the March 2011 tsunami. The NOAA special website on the Tohoku debris is a good place to begin your research. Next is the NOAA Severe Marine Debris Event Report: Japan Tsunami Marine Debris of June 2013.

Because of the size of the potential debris area and the narrow coverage of high resolution satellite imagery, it became clear that a full coverage survey of the situation was not possible or practical. Shortly after the tsunami, a variety of types of satellite imagery (e.g., ENVISAT, LANDSAT, SPOT, ASTER) became available from a variety of sources, including U.S. Geological Survey, the International Disaster Charter, the European Space Agency rolling archives, and a joint NASA/JAXA Web site. This imagery was analyzed by …(NOAA) NESDIS … and indicated fields of debris that were visible in 15–30 m resolution data. More recently, the National Geospatial-Intelligence Agency (NGA) has provided NOAA with higher resolution (1–5 m) satellite imagery.

Although debris fields are no longer visible by satellite, dispersed, buoyant items continue to float in the North Pacific. This assorted debris, referred to henceforth as Japan Tsunami Marine Debris (JTMD), ranges from derelict vessels and large floating docks to small household items, with fishing gear and construction items of various sizes and compositions in between…

Because we do not have wide-field satellite data, we know very little about the actual mass and spatial density of the debris distribution. Our data sources are primarily coastal observations of objects deposited on beaches (Midway, Hawaii, and the North American mainland of CA – OR – WA – BC – AK), plus a few random observations from sailors at sea.

Oceanographic modeling:

NOAA GNOME model: With so little empirical data, our picture of what is out there in the Pacific comes largely from modeling. There are two primary models that we have identified: the NOAA GNOME model and the IPRC model. NOAA periodically updates their summary page about the latest GNOME predictions: NOAA Tsunami Debris Model Sep 2013. The NOAA update page is a concise introduction to the modeling. The next graphic shows the big picture, extracted from that NOAA page. The hatching denotes the predicted highest density of debris with 1% windage at the end of the simulation.

[Image: NOAA GNOME model prediction, September 2013]

IPRC Model: This model is publicly available, run by the International Pacific Research Center (IPRC) in the School of Ocean and Earth Science and Technology at the University of Hawaii. My understanding is that IPRC updates the model initialization and parameters based on observations – so the near-term model frames are closer to reality than the far-future ones. The model doesn’t really tell us about concentrations of threatening semi-submerged objects. It does, however, give us a visualization of what we can infer from the current/wind drift physics.


This link to the IPRC Tsunami Debris Models page provides a tabulation of the most recent model runs for windage levels from 0% to 5%. The January 2014 prediction frame correlates with the reports of deadheads in the area from Hawaii to the west coast of the US. The visualization shows the higher-windage debris rapidly drifting east to the shorelines from Alaska to California, while the deadheads and similar low-windage debris collect in the area east of Hawaii.
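
To build intuition for what those windage percentages mean, here is a toy drift sketch of my own (not the GNOME or IPRC models, and with made-up current and wind values): each object moves with the surface current plus its windage fraction of the wind, so high-windage items outrun the low-windage deadheads.

```python
# Toy windage drift model: a simplification for intuition only, not the
# NOAA GNOME or IPRC models. The current and wind vectors are made-up
# constants; the real models use time-varying fields and many particles.
from dataclasses import dataclass

@dataclass
class Particle:
    lon: float
    lat: float
    windage: float   # fraction of the wind speed imparted to the object (0.00-0.05)

def step(p: Particle, current_kmday: tuple, wind_kmday: tuple, days: float = 1.0) -> None:
    """Advance a particle by `days`, converting km of drift to rough degrees."""
    dx = (current_kmday[0] + p.windage * wind_kmday[0]) * days
    dy = (current_kmday[1] + p.windage * wind_kmday[1]) * days
    p.lon += dx / 111.0   # ~111 km per degree (ignores latitude correction)
    p.lat += dy / 111.0

# A deadhead (~1% windage) vs. a high-windage item such as a small skiff (~5%)
deadhead = Particle(lon=150.0, lat=38.0, windage=0.01)
skiff    = Particle(lon=150.0, lat=38.0, windage=0.05)

current = (15.0, 2.0)    # km/day of surface current, assumed eastward-ish
wind    = (400.0, 50.0)  # km/day of wind run, assumed

for _ in range(365):
    step(deadhead, current, wind)
    step(skiff, current, wind)

print("deadhead after a year:", round(deadhead.lon, 1), round(deadhead.lat, 1))
print("skiff after a year:   ", round(skiff.lon, 1), round(skiff.lat, 1))
```

Even in this crude sketch, the 5% windage object races east while the 1% deadhead lags far behind, which is the pattern the IPRC visualization shows.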

On-the-ground observations:

In sailors’ terms, 1% windage is typically a “deadhead”, an almost totally submerged tree or equivalent timber [Windward Oahu report]. That is what the owner of the 72-foot trimaran Lending Club/Tritium Racing, John Sangmeister, thinks they hit multiple times during the 2013 Transpac race from Los Angeles to Hawaii.

The image at left is of one of the damaged foils aboard Lending Club. We do not wish to be taking such snapshots on our passage through those waters. On the collisions, Kimball Livingston reported:

The first time they hit, they slipped the daggerboard out of its housing, flipped it over and went on with business. Then they hit another time, and re-flipped it, and they’ve hit a few more times without major damage, but wow. The all star crew included Gino Morrelli, Howie Hamlin, Ryan Breymaier and Peter Stoneberg. One interim report: “Logs, logs, and more logs. Sailing normally, but with a large amount of vibration due to the damage.”

We have similar first-hand reports of collisions with semi-submerged objects from a Kiwi friend who was sailing in the same Transpac fleet. During daylight they could see logs and trees in various stages of submersion.

More reports from three other 2013 Transpac yachts:

**Sighted 15′ chunk of floating telephone pole.
s/v Criminal Mischief
7/13/2013

**Sighted 35′ floating tree trunk.
s/v Between The Sheets
7/13/2013

**Large pieces of debris, a couple of pieces of lumber looked like parts of a house

**Struck what may have been a 10′ section of telephone pole.
s/v Manatea
7/12/2013

So, what do we know from “on the ground” observations? Every country and state bordering the North Pacific has debris reporting schemes – based upon shore or near-shore sightings. NOAA collects sighting reports for the North Pacific (via email to DisasterDebris@noaa.gov if you have a sighting to report). Here are the maps of the sightings reports. Unfortunately “the absence of evidence is not the evidence of absence”. In this case the absence of reports may well indicate an absence of vessels to sight debris. Here is a clip from the November 2013 overview map:

[Image: clip from the November 2013 NOAA sightings overview map]

These general sightings reports are not useful to us. An animation showing the date of each sighting would give some very general indication of trends – that would be useful. And the majority of the reports are high-windage items that aren’t a threat to us. As I write, the most indicative observations are from the 2013 Transpac reports like Lending Club’s. Please email us if you know of any other first-hand sightings reports – especially of the type that threaten small boats like ours.

More Resources:


 

A map of the Gulf of Alaska, BC, Washington, and Oregon. An example of predicted debris concentrations (derived from satellite-tracked ‘drifters’) shows Alaska is an immense accumulation point. (Figure adapted from Lumpkin et al. (2012))

Image from Alaska Marine Stewardship Foundation

 

Journey To The Center Of The Gyre: The Fate Of The Tohoku Tsunami Debris Field, by Peter Franks in the Integrative Oceanography Division of Scripps Institution of Oceanography. Very descriptive of marine environment impacts, but not relevant to potential collisions with trees and large wooden structures.

The File Drawer Effect

This is not an academics-only esoteric debate. It is about whether new, effective drugs are released. It is about how accurate our knowledge of reality is. Here is astronomer Phil Plait, quoted in this excellent essay on Survivorship Bias:

For far too long, studies that fizzled out or showed insignificant results have not been submitted for publication at the same level as studies that end up with positive results, or even worse, they’ve been rejected by prominent journals. Left unchecked, over time you end up with science journals that only present the survivors of the journal process – studies showing significance. Psychologists are calling it the File Drawer Effect. The studies that disprove or weaken the hypotheses of high-profile studies seem to get stuffed in the file drawer, so to speak. Many scientists are pushing for the widespread publication of replication, failure, and insignificance. Only then, they argue, will the science journals and the journalism that reports on them accurately describe the world being explored. Science above all will need to root out survivorship, but it won’t be easy. This particular bias is especially pernicious, said Plait, because it is almost invisible by definition. ”The only way you can spot it is to always ask: what am I missing? Is what I’m seeing all there is? What am I not seeing? Those are incredibly difficult questions to answer, and not always answerable. But if you don’t ask them, then by definition you can’t answer them.”

We subscribe to the AllTrials campaign – I can’t recommend their work highly enough. See also Publication bias in Wikipedia and All Results Journals.
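
To see how selective publication distorts the record, here is a small illustrative simulation (all parameters are made up): many studies of a weak true effect are "run", only the positive, statistically significant ones are "published", and the published literature ends up overstating the effect.

```python
# Illustrative simulation of the file drawer effect. All parameters
# (true effect, sample size, number of studies) are invented; the point
# is only that selecting on p < 0.05 inflates the published effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n_per_group, n_studies = 0.1, 30, 2000

published = []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05 and t > 0:   # only "positive, significant" results get published
        published.append(treatment.mean() - control.mean())

print(f"true effect:                   {true_effect:.2f}")
print(f"mean published effect:         {np.mean(published):.2f}")
print(f"fraction of studies published: {len(published) / n_studies:.2f}")
```

Run it and the "published" mean effect comes out several times larger than the true effect, while most studies vanish into the file drawer.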

 

Down With the Luxury Journals?

Derek Lowe:

Nobel laureate Randy Schekman has stirred up a lot of controversy with his public declaration that he will send no more manuscripts to Nature, Science, Cell and such “luxury journals”.

. . .The prevailing structures of personal reputation and career advancement mean the biggest rewards often follow the flashiest work, not the best. Those of us who follow these incentives are being entirely rational – I have followed them myself – but we do not always best serve our profession's interests, let alone those of humanity and society.

We all know what distorting incentives have done to finance and banking. The incentives my colleagues face are not huge bonuses, but the professional rewards that accompany publication in prestigious journals – chiefly Nature, Cell and Science.

These luxury journals are supposed to be the epitome of quality, publishing only the best research. Because funding and appointment panels often use place of publication as a proxy for quality of science, appearing in these titles often leads to grants and professorships. But the big journals' reputations are only partly warranted. While they publish many outstanding papers, they do not publish only outstanding papers. Neither are they the only publishers of outstanding research.

These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called “impact factor”. . .

Making sense of uncertainty – why uncertainty is part of science

Here we have 28 pages of goodness – a very compact users' guide for reading scientific product.

Put crudely, scientists tend to think science is about things we don't know fully. Journalists and politicians (and to a large extent, many people) think it is about things we do know, and they're impatient with ‘maybe's.

The problem here is that not all scientific knowledge is the same, so treating it as though it were is misleading

For your reference library.

 

 

Michael Eisen: ‘The Past, Present and Future of Scholarly Publishing’

A very important manifesto – from PLOS co-founder Prof. Michael Eisen's lecture at the Commonwealth Club of California:

…Universities are, in essence, giving an incredibly valuable product – the end result of an investment of more than a hundred billion dollars of public funds every year – to publishers for free, and then they are paying them an additional ten billion dollars a year to lock these papers away where almost nobody can access them.

It would be funny if it weren’t so tragically insane.

To appreciate just how bizarre this arrangement is, I like the following metaphor. Imagine you are an obstetrician setting up a new practice. Your colleagues all make their money by charging parents a fee for each baby they deliver. It’s a good living. But you have a better idea. In exchange for YOUR services you will demand that parents give every baby you deliver over to you for adoption, in return for which you agree to lease these babies back to their parents provided they pay your annual subscription fee.

Of course no sane parent would agree to these terms. But the scientific community has.

…So what would be better? The outlines of an ideal system are simple to spell out. There should be no journal hierarchy, only broad journals like PLOS ONE. When papers are submitted to these journals, they should be immediately made available for free online – clearly marked to indicate that they have not yet been reviewed, but there to be used by people in the field capable of deciding on their own if the work is sound and important.

The journal would then organize a different type of peer review, in which experts in the field were asked if the paper is technically sound – as we currently do at PLOS ONE – but also what kinds of scientists would find this paper interesting, and how important should it be to them. This assessment would then be attached to the paper – there for everyone to see and use as they saw fit, whether it be to find papers, assess the contributions of the authors, or whatever.

This simple process would capture all of the value in the current peer review system while shedding most of its flaws. It would get papers out fast to people most able to build on them, but would provide everyone else with a way to know which papers are relevant to them and a guide to their quality and import.

By replacing the current journal hierarchy with a structured classification of research areas and levels of interest, this new system would undermine the generally poisonous “winner take all” attitude associated with publication in Science, Nature and their ilk. And by devaluing assessment made at the time of publication, this new system would facilitate the development of a robust system of post publication peer review in which individuals or groups could submit their own assessments of papers at any point after they were published. Papers could be updated to respond to comments or to new information, and we would finally make the published scientific literature as dynamic as science itself. And it would all be there for anyone, anywhere to not just access, but participate in.

…If we all do this, then maybe the next time someone like Aaron Swartz comes along and tries to access every scientific paper ever written, instead of finding the FBI, they’ll find a giant green button that says “Download Now”.

The lecture transcript.

 

UC research should be free

Michael B. Eisen is a personal hero of mine. A co-founder of PLOS, publisher of the biggest Open Access journal PLOS ONE, he campaigns every day to liberate research publications from being imprisoned behind the ridiculous paywalls of the publishers. This op-ed neatly summarizes the case for open access in the University of California system:

It is a felony to share knowledge created by the faculty, staff and students of the University of California with the public.

Wait. What?

In 2011, online rights activist Aaron Swartz was accused of using the MIT computer network to download millions of scholarly journal articles with the intent of freely sharing them with the public. Federal prosecutors aggressively pursued charges against him, and, earlier this month, with a trial looming, Swartz killed himself.

The Justice Department has faced intense scrutiny for its senseless decision to turn this victimless act into a major case, but the real culprits in this tragedy are all the universities across the world that allowed articles that rightfully belong to the public to fall into private hands in the first place.

Every day, faculty, staff and students of the University of California hand over control of papers describing their ideas and discoveries to publishers, most of whom immediately lock them up behind expensive paywalls. They do this not only with the university’s knowledge— they do it with its complicity.

That the public does not have unlimited access to the intellectual output of academic scholars and scientists is one of the greatest-ever failures of vision and leadership from the men and women who run our research universities — all the more so at a publicly funded institution like the University of California.

When the Internet began to take off in the mid-1990’s, it created the opportunity to do something scholars had been dreaming of for millennia — to gather all of the writings of scholars past and present together in a single online public library — a free, globally accessible version of the ancient library in Alexandria.

But 20 years on and we are barely any closer to achieving this goal. Instead of posting their work online, scholars send them to journals, most of which condition publication on receipt of the authors’ copyright. These journals then exercise their exclusive rights to distribute these works by demanding payment for access to their collections.

If you have not yet published in a scholarly journal, you may not realize just how absurd this transaction is. Scholars at the UC system and every other research university on the planet voluntarily hand over control of their work to publishers, work that the same universities have to immediately turn around and buy back. And this is not a minor transaction — revenue for scholarly journals exceeds an estimated $9 billion per year.