Sounds of 2016

2016 was an extraordinary year. Everyone seems to have their own take on what made this past year so remarkable, whether good or bad. There are dozens of ways to reflect on and interpret what 2016 was like and what it has meant to people. For me, the easiest way to understand last year has been through music. Something I’ve always found remarkable about music is the way it can focus the emotions of thousands of people into a single thought. Music undeniably reflects our own fears, dreams, anxieties and aspirations back at us, and it therefore provides a unique perspective on the past year. I’ve collected some of the songs that resonated with me throughout the year. They are songs that feel particularly important when I think about what 2016 meant to me and the people in my life.

When I first started putting these songs together I was struck by what seemed to be the most prominent recurring theme: 2016 was a year full of anxiety for many people. Many artists and their fans resonated with lyrics describing the depths of this stress. Phantogram sang about their teeth falling out of their head in You Don’t Get Me High Anymore, Vince Staples rapped about staying “at the Marriott having Kurt Cobain dreams” in Loco, and Radiohead warned of a “low flying panic attack” in Burn the Witch. It wasn’t hard to find sources of anxiety in 2016. Whether it was the Brexit referendum, the changing role of men in western society, the raging opioid epidemic, the slaughter of black Americans, or the disastrous 2016 presidential election, there was a reason for almost everyone to be nervous last year.

In this playlist there are more than 20 artists, but only four appear more than once: Bon Iver, Flume, Beck and Macklemore. Bon Iver’s 22, A Million happened to be my favorite album of the year, which is not something I expected in a year with a new release from Animal Collective. Justin Vernon and his band pressed the limits of sonic experimentation while maintaining accessibility and emotional depth in a way I’ve never seen. The album expresses an existential anxiety that seems to have infected the very structure of the songs. By blending auto-tuned lyrics, gospel verses, drum machines, and feedback, the band creates an experience which is far more than the sum of any single song. The concepts in the album seem to stretch the limits of the media in which they are instantiated; the album feels as if it is pushing for something more, but lacks the language to describe it. Following the release of 22, A Million, Vernon and some friends from The National organized an experimental music festival in Berlin aimed at reinventing the way music is valued. The festival experimented with the way music is produced and consumed; as Vernon put it in an interview with Pitchfork, “It’s not even about breaking down capitalism—it’s just about showing people a different way.” In a world where a politically inexperienced billionaire was elected president of the most powerful nation on the planet, this type of experimentation seems more important than ever.

For me, the most interesting dance music of 2016 came from Australian producer Flume, who released his second album, Skin. The LP is astonishingly cohesive for a genre which values singles and remixes more than albums. Flume is an undeniable titan in electronic dance music, even if the entire industry is dead or dying. Skin comes across as surprisingly deep and thoughtful, though Flume never fails to make you want to move. Flume found inspiration in the notion of “the fabric of reality tearing,” which led directly to the production of Wall Fuck, the most aggressive and unsettling song on the album. In 2016, however, the fabric of society really was being torn for many people, as the Brexit referendum ripped into European culture and Donald Trump shredded the liberal ideals of young Americans.

My favorite track from Skin was Flume’s collaboration with Beck, Tiny Cities. In many ways I think Beck made music in 2016 to try to make everyone feel better. Tiny Cities reflects on dealing with the loss of ideals, heard in the repetition of “it was never perfect, it was never meant to last.” Beck acknowledges how confusing it can be to sustain hope when everything you’re seeing seems to oppose it, singing, “How can I convince myself to not believe in what I know? When all I see is dominoes falling up it as we go.” The lyrics in Tiny Cities come off as somber and contemplative, but after listening to the song it’s difficult not to feel uplifted. In 2016 Beck also released Wow, which is arguably his biggest commercial success since 1994’s Loser. In a difficult year, Wow was joyous. In my mind that single is inexorably linked to some of the most amazing news of 2016, like the detection of gravitational waves and the discovery that the closest star system to our sun contains a planet in the ‘habitable zone.’

Last year was a year of political upheaval, and as usual musicians had a lot to say about it. Macklemore spoke directly about two issues that really matter to me. First, in his single Drug Dealer, with Ariana DeBoo, he addresses the moral, emotional and medical crisis that is the opioid epidemic in America, an issue he has personal experience with. Second, Macklemore jumped into a collaboration with YG and G-Eazy to reiterate a sentiment I never miss an opportunity to share: “FUCK DONALD TRUMP.” On Nov 9th, 2016, Phantogram’s Same Old Blues took on a new meaning and depth for me. Sarah Barthel’s lyrics seem to echo the concerns and sorrows of several important women in my life. The election of Donald Trump has been explained in many ways, but it is impossible to understand his success without admitting the role of misogyny in American culture. When Barthel describes “having this dream / Where I’m stuck in a hole and I can’t get out / There’s always something that’s pulling me down, down, down,” it’s easy to imagine that the hole she’s describing is the national endorsement of sexism and misogyny represented by the 2016 presidential election. On Nov 9th many people, and particularly women, understood exactly what Barthel meant when she sang, “Today I lost my future to the past.”

Many other artists also made powerful political and social statements in 2016. A Tribe Called Quest put out a new album for the first time in 18 years, leading with the single “We the People,” which reflects on the rising tides of racism, homophobia and xenophobia in the western world. The Washington punk-hardcore band G.L.O.S.S. (Girls Living Outside of Society’s Shit) rose to fame early in the year after the success of their 2015 Demo, proving that the core value of punk, cultural subversion, is alive and well and as important as ever. MIA questioned the values of pop culture and challenged our society’s choice of language in Borders, wondering out loud why we refer to being poor as “broke” and success as “slaying it.” Girl-power supergroup Nice as Fuck, led by Jenny Lewis, insisted that messages of peace and love are the way forward, while “the shit that we talk is a smoke screen.”

2016 witnessed new types of musical experimentation. Kanye West’s The Life of Pablo (TLP) was supposed to be a living album, a concept which added to the confusion of the album’s release. The album earned both commercial success and critical acclaim, and listeners tend to feel strongly about it: a Pitchfork reader poll ranked it as both one of the most underrated and most overrated albums of the year. TLP features a 44-second track titled I Love Kanye, in which Kanye West raps about Kanye West and reflects on how the world thinks Kanye West thinks about Kanye West. This track is either a passionate and intellectually stimulating attempt by an innovative artist to harness the creative power of ‘self-reference,’ or the earliest sign of Kanye’s impending mental breakdown. In stark contrast to the sheer production power of TLP was Kendrick Lamar’s surprise album untitled unmastered. This album was simple, raw, unstructured and honest. It gave us insight into the mind of a rising star in hip hop, and it effortlessly topped the Billboard charts.

Last year gave us some heart-warming tracks, whether it was the intangible emotion in Animal Collective’s Golden Gal or the explicit feelings of love expressed by Father John Misty in Real Love. Artists like M83, the Growlers and Jagwar Ma made us dance without needing to drop the bass. Run the Jewels and YG made it clear why we are mad. Whitney and James Blake made us stop and breathe. The Avalanches released their first album in 16 years, Wildflower, which is composed of over 3,500 unique samples. Wildflower reminds us that music derives its power from its combinatorial nature. Music is made by individuals, but it is an artifact of societies and cultures, evidence of our attempts to understand and communicate in ways that language alone cannot accommodate. Music is important not only because people make it to represent themselves, but also because people consume it to understand themselves. I think the unfolding of 2016 made the music of the year important, but I also think that 2016 was an important year for music.


New Peer Review Paradigms for Astrobiology and Origin of Life

Scientific publishing is changing. More scientific papers are being published now than at any other time in history. The digital era is facilitating new publishing practices, such as preprint servers and Open Access journals. At the same time, there is growing concern among scientists about the integrity and equity of the peer review process as we know it. These new practices present the scientific community with an array of new opportunities, which may help revitalize peer review and provide new means of scientific discourse. The astrobiology and origin of life (OoL) communities, as young and dynamic fields of inquiry, can and should lead the broader scientific community in seizing these new opportunities by creating and utilizing new collaborative platforms.

The process of peer review is essential for scientific progress. For well over three hundred years the mechanisms used to instantiate the simple ideals of peer review remained largely unchanged, until now. The digital era is making new mechanisms of scientific discourse possible by allowing information and methodology to be readily disseminated. A large new generation of technologically literate and globally connected scientists is demanding that science move faster to keep pace with a dynamic world [3]. Meanwhile, one of the key tenets of peer review, the reproducibility of results, seems to be on the decline [2], and concomitantly the number of retractions in the scientific literature is on the rise [8]. I will outline some of the key features of these issues and argue that they are all symptoms of a peer review process which could be improved by utilizing new collaborative platforms.

What are the problems in peer review?
Scientific publications take a long time to process. The typical time between first submission and publication in traditional journals can be months or years, depending on the publisher. On its face, this is not a huge problem: peer review is a key feature of scientific progress, and the means by which it is executed should be diligent and thorough, which inevitably takes time. However, manuscripts are often rejected not because the results presented are invalid, but because the writing could be clarified, because the editor of the journal finds the work ill-placed, or because reviewers require more experiments to validate the results. In many of these situations, it is preferable to have faster, direct communication with the peer reviewers, rather than slower, anonymized communication mediated by a busy editor. Presently, there are few tools to facilitate the kind of rapid, public, and collaborative review process which could improve the clarity and quality of manuscripts prior to their submission to journals.


In spite of the lengthy review process involved in most journal submissions, the number of retractions in the scientific literature is rising [8]. This could be due to a number of factors, including the possibility that invalid results are being found and refuted faster, which on its face is a good thing for the peer review process. However, a recent survey of more than 1,500 scientists by Nature reported that 90% agreed there is a reproducibility crisis in modern science, and more than half agreed that the crisis is “significant” [2]. Across every discipline surveyed, more than half of the respondents reported failing to reproduce someone else’s work, but only 23% reported that they had tried to publish failed reproduction attempts, and only 13% had successfully published a failure to reproduce results. When asked about the factors which might be contributing to these issues, respondents overwhelmingly agreed that pressure to publish is an important factor, among other reasons such as insufficient peer review, poor instruction, and insufficient statistical tests.

An overwhelming pressure to publish is a common complaint among contemporary scientists [7], and particularly among early career scientists [1]. One of the main driving factors behind this pressure is that citation statistics (so-called bibliometrics) are one of the only means available to quantitatively evaluate scientists [6]. The rise of interdisciplinary science [9] compounds this problem, because patterns of publication vary between disciplines and fields, making potential candidates for positions or grants difficult to evaluate and compare. Worse, bibliometrics are typically blind to the peer review process. A scientist who publishes less of their own work, but reproduces others’ results and gives clear, detailed and constructive feedback to other scientists, may be contributing much more to their field than a scientist who quickly publishes many articles but provides limited, non-constructive, or incoherent feedback in their reviews. Bibliometrics alone would rank the latter above the former in terms of contribution to their field.
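To make the bibliometric notion concrete, here is a minimal Python sketch of the h-index, one of the most widely used citation metrics. Note what it measures and what it ignores: it counts only publications and their citations, and is completely blind to reviewing or reproduction work.

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank  # this paper still has enough citations for its rank
        else:
            break
    return h

# Two hypothetical careers: neither list records any reviewing effort,
# which is exactly the blindness described above.
print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([50, 2]))           # 2
```

The second author has more total citations but a lower h-index; no variant of this calculation sees the peer review work either scientist has done.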

Bibliometrics identify a scientist’s reputation with the number of citations their work has received. Accordingly, scientists typically attempt to publish in the highest-impact journals possible. While high-impact journals are perceived as prestigious in the scientific community, they often limit access to scientists’ work by requiring subscriptions. These subscriptions typically cost upwards of $200 per year per journal, and individuals who do not have access via their institution typically cannot read work published in these journals. There are alternatives: so-called Open Access (OA) journals. These journals do not charge users to read content; instead they charge authors to have their work published. Publishing a single OA article can cost upwards of $1200. While many authors would prefer to have their work available to all, the financial burden of OA publishing is often not justifiable, particularly for graduate students and post-docs.

What are the opportunities for peer review?
In physics, mathematics and computer science, the use of the preprint server arXiv is ubiquitous. arXiv acts as a supplement to traditional peer review mechanisms in those fields. It hosts scientific manuscripts which have not yet been subjected to complete peer review, typically uploaded at the same time as their submission to a scientific journal; these manuscripts are called preprints. By hosting work in progress, arXiv (and other preprint servers) facilitate rapid communication of new results, discoveries and ideas. In contrast to scientific journals, which take months or years to publish newly submitted work, preprint servers make work available in a matter of days. While many other fields have begun to adopt preprint servers [5], their use is by no means ubiquitous in the OoL and astrobiology communities. This is an opportunity for those communities: we can make a push (following suit from traditional biology) to use preprint servers to improve the availability of our science immediately.

One of the key benefits of preprints is the ability for scientists to get feedback on their work faster than through journal-mediated peer review. For that to be productive, scientists must be willing to dedicate more of their time to reading, reviewing, critiquing and improving the work of their peers. Most scientists already do this by reviewing journal submissions, but adopting preprints means that scientists will have to go out of their way to improve the work of someone with whom they might be directly competing. This might seem a naive waste of time for early career scientists struggling to get ahead. However, in the face of rising pressure to publish, scientists are becoming increasingly collaborative [7]. This willingness appears to be based on the fact that they can still receive credit for small contributions, allowing them to gain more publications, which are the gold standard of a scientist’s reputation. This suggests that if scientists were to receive some kind of useful credit for reviewing the work of others in a meaningful and constructive manner, they would make time to do so. This is an important opportunity to improve the quality of peer review: if we can establish a means to credit reviewers (in an easily quantifiable manner) for their contributions, everyone will benefit.

While preprints are not always peer reviewed themselves, they provide free and open access to articles which do eventually undergo peer review, because most journals, including Nature, Science and PNAS, allow submissions of articles which have been posted to preprint servers. There is a demand among scientists for easier access to all types of articles, as evidenced by the widespread use of the scientific piracy site Sci-Hub [4]. It appears that even academics with institutional access are using the site, which blatantly violates a number of intellectual property policies, because it is easier, more convenient and more intuitive than accessing articles through publishers. While this is clearly a missed opportunity on the part of journals and publishers, it demonstrates that scientists crave quick and convenient access to content. Preprints provide access in this manner, and they are easily searchable via common search engines such as Google Scholar, which helps authors increase the (legal) visibility of their work without having to foot the bill for an open access journal.

How can we exploit the opportunities to solve the problems?
It is becoming clear that there is a demand for new modes of scientific discourse. The current journal-centric paradigm is slowing down peer review and limiting access to scientific content, while devaluing diligent and constructive peer review. arXiv is a clear example of the power and utility of preprints, but widespread adoption of preprints alone will not solve these key problems. Scientists need to be incentivized to critique and improve other people’s work. For that to happen, scientific articles must be easily accessible, and scientists must be able to communicate rapidly and freely.


I suggest that astrobiologists and origin of life researchers work together to develop a new collaborative platform (perhaps on SAGANet) which will allow users to post preprints and interact with peers by reviewing and reproducing results. This new platform would provide easy access to preprints, as well as new metrics to evaluate scientists. A reputation system would need to be put in place to track the contributions of individual scientists. Such a system could identify users’ areas of expertise (from the perspective of the community), as well as the number of experiments and results they’ve confirmed and the quality of the comments and suggestions they’ve provided to their peers. A reputation system of this type could extend beyond experiments and articles to include contributions to databases and software platforms, allowing for enhanced collaboration throughout all aspects of scientific work [6].
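As a sketch of what such a reputation system might track, consider the following Python fragment. Every event type and weight here is hypothetical; the actual categories and values would have to be settled by the community, much as Stack Exchange tuned its own reward system over time.

```python
from collections import defaultdict

# Hypothetical contribution types and weights -- illustrative only.
WEIGHTS = {
    "preprint_posted": 5,       # sharing new work
    "review_written": 10,       # a substantive public review of a preprint
    "result_reproduced": 25,    # an independent confirmation of a result
    "review_upvoted": 2,        # peers endorse a review as constructive
}

def reputation(events):
    """Aggregate a user's contribution events into per-area scores.

    Each event is an (area, kind) pair; scoring per area lets the
    community infer areas of expertise, as described above."""
    scores = defaultdict(int)
    for area, kind in events:
        scores[area] += WEIGHTS[kind]
    return dict(scores)

events = [
    ("astrobiology", "review_written"),
    ("astrobiology", "review_upvoted"),
    ("origins", "result_reproduced"),
]
print(reputation(events))  # {'astrobiology': 12, 'origins': 25}
```

The key design point is that reproduction and review earn credit directly, rather than being invisible as they are under citation-only bibliometrics.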


This platform could be an extension of SAGANet to accommodate a reputation system. Stack Exchange is a very successful platform which hosts question-and-answer forums and utilizes an effective reputation reward system. While the goals of a peer review platform would be very different from Stack Exchange’s, we could learn a lot from the lessons of that site. The preprints themselves could be hosted on a new astrobiology/OoL server, or on an existing preprint server. arXiv has a developing API and a quantitative-biology category, which might appeal to the physical scientists already using arXiv. While the technical details of this endeavor will undoubtedly be complicated, the tools and techniques are out there.


Beyond any technical or logistical challenges presented by establishing this type of platform, there will be a huge social challenge. We will have to convince scientists to reveal their work prior to publication, which is still a major hurdle in many fields. We will have to convince scientists to review and recreate others’ experiments whenever possible. This will not be easy. However, it won’t require the support of any institution, university, or funding agency, and it won’t depend on the support of senior researchers. It will depend on the support of graduate students and post-docs who are passionate about doing the best science they can. Astrobiology and origin of life research investigates some of the deepest, most conceptually challenging questions in science. A new collaborative platform would allow scientists in those fields to hold themselves to the highest possible standards of peer review. It would allow those fields to be the most open, equitable, and accessible in the scientific community. Peer review has allowed science to achieve the unimaginable year after year. In order to tackle the hardest problems of our time, we need to make peer review work better, and the tools to do so are feasible; we just need to build them.

If you are interested in making these improvements in our fields, please let me know. Review this piece of writing, tell me what you think, comment below, or contact me directly. I look forward to your comments and criticism.
[1] Measures of success. Science, 352(6281):28-30, Mar 2016.
[2] M. Baker. 1,500 scientists lift the lid on reproducibility. Nature, 533(7604):452-454, May 2016.
[3] N. Bhalla. Has the time come for preprints in biology? Molecular Biology of the Cell, 27(8):1185-1187, 2016.
[4] J. Bohannon. Who's downloading pirated papers? Everyone. Science, Apr 2016.
[5] J. Inglis and R. Sever. bioRxiv: a progress report, 2016.
[6] H. Piwowar. Altmetrics: value all research products. Nature, 493(7431):159, 2013.
[7] A. Plume and D. van Weijen. Publish or perish: the rise of the fractional author.
[8] R. G. Steen, A. Casadevall, and F. C. Fang. Why has the number of scientific retractions increased? PLoS ONE, 8(7):e68397, 2013.
[9] R. Werner. The focus on bibliometrics makes papers less useful. Nature, 517(7534):245, Jan 2015.

Solid, Liquid, Gas…. Life.

“What we used to call biophysics often is just physics of biological material or the dynamics of the material processes involved. … I am asking for a ‘physics of biology.’” —Manfred Eigen, 2000

What is a Phase of Matter?

In school, we are taught about states of matter: solid, liquid and gas. Most people graduate high school with the impression that these are the only states of matter; you may remember hearing something about a fourth phase called plasma, but that’s usually set aside as ‘something different.’ For almost all of human experience, matter falls into one of these three phases: solid, liquid or gas. It might seem reasonable to suggest these are the only states of matter possible, and for a long time physics would have agreed.

The study of phases of matter is a strange narrative. No one can be credited with discovering the existence of phases, as even the most ancient thinkers were aware of the differences between liquids and solids. It was not until 1911 that a new phase of matter was discovered: Heike Kamerlingh Onnes discovered superconductivity in mercury while working at the University of Leiden. His experimental results were so surprising that they wouldn’t be explained theoretically until 1957, by BCS theory. With Onnes’s discovery the floodgates were opened; over the next few decades dozens of new phases of matter would be discovered, with famous examples including Bose-Einstein condensates and superfluids. Why don’t we usually hear about these other forms of matter? Even the physicists and chemists who study these strange configurations of matter describe them as “exotic.” They are not the kind of thing you come across in your day-to-day life; new states of matter are often cooked up in labs and may never have practical applications.

So what is a phase of matter? If there were only three different types it might not be such a difficult question. We could simply enumerate all of the possible states of matter, describe the differences between them, and be done with it. However, as more and more distinct states of matter are discovered, it’s not clear that we should expect the list of all possible states of matter to be finite, and further, there is no reason to expect that all states of matter are mutually exclusive. For example, carbon nanotubes can be simultaneously solid and superconducting. This fact doesn’t fit into the high school narrative we were taught, so it might seem surprising: you never come across something that is both solid and liquid, or liquid and gas, in your day-to-day life. But those two descriptions refer to fundamentally different properties of carbon nanotubes; the solidity describes the fact that carbon nanotubes will not change shape to fit their container, while the superconductivity refers to the nanotubes’ electrical resistivity (or lack thereof).

Phases of matter are distinguished by their emergent properties. Emergent properties are qualities which come about not as a result of fundamental characteristics of individual components, but rather as a manifestation of the interactions between many components. For example, a single molecule of H2O is not water, ice, or vapor; it’s just two hydrogen atoms and one oxygen atom bound together. Single molecules have no properties that can be identified as their ‘phase.’ However, take many molecules of H2O and new measurable characteristics emerge that do not exist for individual molecules. These characteristics are the manifestation of interactions between the constituent parts of a given material system. They cannot be assigned to any particular atom, molecule or component, but they are still an undeniable property of the whole. For example, water at room temperature is liquid, and a property of liquids is that they take the shape of whatever container they are in. This doesn’t mean that the individual water molecules have changed at all; it’s just that they are free to be rearranged in any arbitrary manner.

To the physicist’s eye, physical phases are a way of carving up a large parameter space into regions wherein the bulk properties of the system are essentially uniform, or change in a uniform, continuous and predictable manner. As a result, most of the theoretical work on phases has been dedicated not to understanding particular emergent properties, but to phase transitions: the dividing lines between distinct states of the system. The figure below is a phase diagram of water. Here the parameter space is defined by all the possible combinations of temperature and pressure for a fixed volume of H2O. The phase transitions are the lines that separate liquid water, ice, and water vapor.



Phase transitions have typically been identified through experiment rather than predicted theoretically. This is in part due to the very nature of phase transitions: they occur, in principle, because the properties of a particular system are not analytic at the transition. Analyticity is a subtle mathematical concept, but the key implication is that it is impossible to extrapolate the known dynamics across the transition to make predictions about the other phase. So, for the most part, the entire study of phase transitions is phenomenological in nature.

Phases are really just regions of a large parameter space where emergent (not fundamental) laws are consistent and uniform. They are a way for scientists to organize our knowledge based on the dynamics relevant at particular parameters, and they allow us to divide up a single system into many distinct classes of dynamics.
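This carving-up of parameter space can be sketched in a few lines of Python. The function below is only a caricature of the water phase diagram above: the real phase boundaries are curves in (temperature, pressure) space, while here we use the familiar 1-atm melting and boiling points and a single approximate triple-point pressure, ignoring how the boundaries shift with pressure.

```python
def water_phase(temp_c, pressure_atm=1.0):
    """Drastically simplified phase classification for water.

    Real boundaries are pressure-dependent curves; this sketch uses
    the 1-atm values (0 C and 100 C) everywhere, plus the approximate
    triple-point pressure below which liquid water is not stable."""
    if pressure_atm <= 0.006:  # roughly the triple-point pressure
        # Below the triple point there is no liquid region at all:
        # ice sublimates directly into vapor.
        return "ice" if temp_c < 0 else "vapor"
    if temp_c < 0:
        return "ice"
    if temp_c < 100:
        return "liquid"
    return "vapor"

print(water_phase(25))                      # liquid
print(water_phase(-10))                     # ice
print(water_phase(20, pressure_atm=0.001))  # vapor (no liquid region)
```

Even in this toy version, the essential idea survives: the function partitions a two-dimensional parameter space into labeled regions, and the interesting physics lives on the boundaries where the label changes.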

What is Life?

We all probably have an intuitive idea of what life is. It is usually fairly obvious to us which types of things can be alive and which types of things can never be alive (perhaps with a few exceptions, such as viruses). Unfortunately, translating these notions into a single definition is extremely difficult. Despite our intuition about what life is and is not, there does not seem to be much consensus among scientists. Many researchers choose to ignore the question, opting for an “I know it when I see it” approach. This can be successful for biologists or biochemists, who are typically interested in elucidating the mechanisms that life uses to function. However, the problem cannot be avoided if we want to understand the origin of life. We cannot say when or how something started if we cannot define the very process that is starting.

We don’t have a complete definition of life, but it is not for lack of trying. In fact, there are many working definitions out there. They vary from discipline to discipline and from person to person, and they are usually tailored with a particular purpose in mind. NASA’s working definition of life is an excellent example: “a self-sustaining chemical system capable of Darwinian evolution.” This single sentence seems to encapsulate a lot of the key characteristics that we associate with life as we know it. However, this definition also exemplifies a fundamental problem which many astrobiologists struggle with, best explained with an example. Robert Forward wrote an excellent science fiction novel in which ‘life’ begins on the surface of a neutron star. In the novel, this process has all of the characteristics we associate with life as we know it, except that it is not chemical! All the metabolic and genetic processes are built on nuclear interactions rather than chemical ones. This might be a bit of a stretch, but it illustrates a point which is usually overlooked: we often think of life as something that exists in parallel with its physical embodiment.

Scientists and philosophers who study emergence call this functional equivalence. When new emergent phenomena are observed on a particular macroscopic scale, they are typically independent of the lower-level microscopic details; the microscopic system can be completely different and still lead to the same macroscopic phenomena. We like to imagine that chemical systems are just one class of dynamical systems in which life could be instantiated, and that there are potentially many others out there, yet to be discovered. This is very similar to an idea from artificial intelligence: the notion that we could create a computer which would experience subjective reality in the same way our organic brains do.

Life as a state of matter

Life uses physical processes, and therefore is ultimately constrained by its physical embodiment, but the property of being alive is a higher-level description of matter. Single molecules are not alive, and yet all living things we know of are made of molecules. What this suggests is that life is an emergent property of matter. We also suspect that life is not a property found in EVERY physical system. If we take life as we know it as an example, it is clear that there are certain conditions under which matter can be alive, and certain conditions under which it cannot be. Life is everywhere on earth, for example, but hasn’t been found on the moon. It’s possible non-chemical life could exist in places where chemical life couldn’t (like the surface of a neutron star, to take Forward’s example). Any particular type of life will presumably have a particular range of environments where it can exist. So for a given physical system, we can imagine a very large parameter space, similar to the one above for water, except that it may have many more dimensions than just two. We can divide this space into regions where the system could be alive, and regions where the system could never be alive.

So we have regions of a parameter space that we can identify by their emergent properties; namely, we can separate this space according to whether or not the system is alive in a given region. We’ve just defined a phase. But what should characterize the living state? What kind of phase diagram would such a thing be placed on? Could we ever define the boundaries? Does this actually get us anything new? Or are we still stuck with all the problems of defining life?

Researchers have been using the notion of the living state as a unifying theme to improve our understanding of the origin of life and the biosphere as a whole. The prominent mathematical biologist Martin Nowak has demonstrated that tuning a single parameter can drive a system from what he calls ‘pre-life’ into a different phase he calls ‘life’ (based on the possibility of replication and evolution in that phase). The Nobel laureate chemist Manfred Eigen has hypothesized that the onset of natural selection is a phase transition. The physicist Eric Smith has suggested that many of the metabolisms we find here on earth represent an undiscovered class of non-equilibrium states, the onset of which would have a lot in common with other phase transitions.

So it is possible, then, that life is a state of matter. What are the characteristics of this state? What parameter space will most clearly demonstrate its existence? We don’t know the answers to these kinds of questions yet, but the idea gives scientists a framework to begin working with. In particular, it provides an excellent theoretical framework for understanding the origin of life as a phase transition. By creating and characterizing this theoretical phase space, researchers can begin to separate out specific and tractable questions about the origin of life on earth and elsewhere. Further, this framework allows researchers to make progress on the origin of life without ever having to define it explicitly. As new definitions are proposed, they can be placed inside this phase diagram, allowing us to understand how those definitions are related to each other. Life as a state of matter represents a novel and promising framework for understanding biology and its origin.

Who’s Paying for Peer Review?

This piece was originally printed in the Tempe Normal Noise magazine at ASU. It presents a small part of a debate going on within contemporary science. Discussions are being held in many different mediums, from the popular press to workshops, conferences, and publications. Here, some opinions in this debate are voiced through a conversation between fictionalized scientists, all of whom are in the early stages of their careers. The conference, studies, and numbers mentioned in this piece are real. While this specific conversation never happened in Tempe, it could easily have taken place at any scientific conference in the world.


It was 11:30 pm on a Thursday. “I should probably be heading home soon anyways,” Phil thought as he ordered another Coffee Kolsch at the bar. He had another busy day at the conference tomorrow. The Conference on Complex Systems, or CCS, had descended on the industrial southwest corner of Tempe, bringing with it more than 600 scientists from all over the world. While 600 researchers would be a barely noticeable blip on the scale of ASU’s 83,000 enrolled students, the intellectual impact of the conference was difficult to overstate. Complex systems science represents a huge conglomerate of new research directions from many disparate fields. With keynote talks about the nature of social norms, the connection between the global food distribution network and the Arab Spring, and recent progress towards a predictive theory of evolutionary biology, the conference was interdisciplinary to the core. Complex systems is a field of science defined in the negative: it strives to be what traditional scientific disciplines cannot be, to remove the borders between schools of thought, to integrate knowledge from seemingly unrelated fields into a more comprehensive view of the world. It has ambitious goals.

The patio at Casey Moore’s was surprisingly deserted for a Thursday night. Phil took a second to enjoy the cool breeze as he stepped back outside. It was the first of October, and it felt like the first night of fall; maybe the summer was finally ending. When he got back to the table, his friends and fellow conference-goers were debating the quality of American craft brews compared to traditional Belgian ales. The conversation meandered a bit: there was a brief mention of the refugee crisis in Europe, and some questions about what exactly Donald Trump means for American politics. Eventually a topic of substance emerged: the pervading notion that you must “publish or perish” in contemporary science. It is an issue that can be seen in almost every branch of academia, but it is particularly prevalent in science, or so it seemed to this group of scientists. Doing “good” science is really difficult, and a single scientific project could, and sometimes should, take years to complete. For an early-career scientist, publication count is perceived not only as a measure of her/his success but also as a measure of her/his value to the scientific community. Despite the general consensus that publication count is not the only measure of success for a scientist, most professional opportunities, from post-doctoral positions, fellowships, and grants to tenure applications, still depend heavily on it.

“It’s all bullshit anyways,” said Roth, “look at █████ ██████, he publishes a bunch of noise, really fast, and everyone thinks he’s brilliant because one in twenty of his publications is good.” Roth was making a sound point. Not only does a publication count ignore the quality of the science being done, it isn’t even necessarily a good representation of how much work was done by a particular scientist. Publications typically have multiple authors, and there’s no universal standard for how much work you need to do in order to be listed as an author on a paper. This means that a well-connected scientist can contribute to many different projects and be listed as an author on all of them, even if she/he didn’t do much for any of them. This wasn’t too much of an issue, though. The real problem, the one Roth was concerned with, was more practical. When funding sources care more about the quantity of publications than the quality of results, there’s a massive incentive to spread good results over as many publications as possible. This ultimately dilutes the content of any single journal article while simultaneously creating many more articles to read.

Recently, Physical Review E (the journal specifically designated for statistical and nonlinear physics) announced that it has published more than 50,000 articles since its inception in 1993. This raises the question: what does that number mean for human understanding? In 20 years, a single journal (of which there are many more) has generated more content than any human could read and comprehend in a lifetime—let alone over the course of a graduate program. How did science end up with this kind of system?

The first scientific journals were formed over 350 years ago. Before that time, scientific knowledge was embodied either in massive tomes hidden in university libraries or in personal letters written between colleagues. At the time of their inception, and for a long time after, scientific journals were extremely impressive endeavors: collecting manuscripts from scientists all over the world (i.e. Europe[1]), redistributing them for peer review, and then publishing and distributing the results. No step in that process was easy in the days before steam engines, railways, or automobiles. As such, these publishers charged a steep price for access to their knowledge distribution network. Depending on the field, publishers today can charge anywhere from a few hundred dollars to upwards of 5,000 dollars for a single title. Before the advent of the internet, this kind of pricing might not have seemed so extreme; in fact, for serious researchers and institutions, it was a good deal. However, you might have expected scientific publishing to undergo a massive restructuring as the internet emerged and began contributing to almost every human endeavor. After all, scientists were some of the earliest adopters of the Internet[2].

Unfortunately, the basic business model from 300 years ago is still alive and well in the information age. The premise is this: scientists work diligently in their area of expertise, and when they find results that are new, exciting, contradictory, or otherwise interesting, they compose a manuscript describing the experiment and its results. This manuscript is submitted to a journal, which sends it out to other specialists in that field. These peers review the article based on its clarity, its scientific merit, and, importantly, its perceived relevance to the larger field. If the article is deemed clear, scientifically sound, and interesting, it is accepted and published in the journal. Often articles will be revised several times before being accepted at a given journal. Once a piece is accepted, it becomes the intellectual property of the publisher, and other scientists must pay the publisher for access to those results. At no point in this process does the publisher pay any of the scientists involved. In any other industry this model would not only fail, it wouldn’t even make it past the boardroom of any respectable company.

You might think that these scientific publishers are simply generous: they perform all these essential functions for the scientific community and only seek to recoup their costs. Unfortunately, that is not the case. According to a study published in PLoS One, the largest scientific publisher, Reed-Elsevier, makes a profit margin of about 39%. That is a higher margin than the Industrial & Commercial Bank of China (29%) or Hyundai Motors (10%); in fact, it is on par with Pfizer (42%). Those companies represent the highest-margin organizations in the banking, automobile, and drug industries, respectively. This is a company which sells access to content it doesn’t create, which has outsourced its quality control to its users, and which has essentially no marginal cost: the total cost of sharing one .pdf is the same whether it is downloaded 10 times or 10,000,000 times. Yet in the 21st century, journals are still able to charge a premium for access to their articles. Profit margins like this are driven predominantly by the fact that scientists are not the customers in this model; university libraries are. University and institution libraries are essentially captive audiences in this system. Researchers desperately need access to current publications in order to make progress, which is a big reason to associate with an institution in the first place. University libraries don’t decide which journals to purchase based on traditional supply and demand; instead, the decisions are made based on budget allocations, which are typically independent of both. “Publishing has become too large of a process,” complained Louie. “One of the original goals of journal publishers, besides implementing peer review, was to sort the relevant discoveries out of all the unimportant ones. But with the number of subfields these days, the opinions of editors are less and less important. Particularly for interdisciplinary scientists, the opinion of their peers should matter more than editors’, isn’t that the point of peer review?”

“One of the motivations for the whole Open Access push was to remove that responsibility from the publishers,” responded Sonya. “Other scientists, not journal editors, should be deciding what is and isn’t relevant to the field.” Open Access is a model pioneered by PLoS (the Public Library of Science), an online-only publisher. All the articles in PLoS journals are open to the public, for free. The catch is that PLoS (and other Open Access journals) operate on a pay-to-play model: rather than charging readers to view their content, they charge scientists to publish their work. PLoS One has quickly become one of the biggest journals in the world, but the model is still extremely controversial. Importantly, PLoS does not judge articles on their perceived importance, only on their academic merit and clarity. PLoS argues that this helps create an unbiased forum, where the scientific community, not the publishers, determines what work is important and what is not. Others, like the Harvard biologist John Bohannon, argue that this model is profit-driven, that accepting more, less significant manuscripts is only a means to make more money, and that it ultimately slows down progress. “I definitely think that interdisciplinary fields should get behind a lot of the open access ideas,” said Rosco, “but it’s crazy to expect early-career scientists to pay $1500 just to publish a paper! Considering you might have 3-5 before you graduate, how is anyone supposed to afford that?”

“It’s not just about us, though,” retorted Sandy. “There’s a moral aspect to this too. Open Access makes sure that everyone has access to information. That might not be important for physics or geology, but for medicine, and for science that can affect public policy, it’s really critical that people have access to the most up-to-date science.” While the Open Access model has been heralded by some, like the Cambridge mathematician Timothy Gowers[3], as an appropriate model for the information age, it has some serious legitimacy issues. Despite PLoS being one of the largest publishers in the world, its publications are still seen as suspect by some researchers. Early-career scientists often worry about having too many PLoS publications on their CVs. In some specializations, PLoS is seen as a place where papers are submitted after being rejected at most traditional outlets. To make matters worse, in 2013 John Bohannon published a study in Science (a journal which is not open access) showing that almost half of open access journals had little to no serious peer review process. The study generated papers which seemed sound but were actually flawed in very obvious ways. The papers were submitted to 307 open access journals all over the world and were accepted by 147, typically without revision or comments from editors. While many larger journals rejected the paper outright—PLoS included—the study demonstrated the real danger in creating journals where authors, rather than readers, pay.

Using publication records to measure academic success has worked for over 300 years, but it seems to be slowly taking control out of the hands of scientists and putting it in the hands of publishers. The success of publishers as corporations doesn’t contradict the success of science; after all, technology and human understanding have advanced remarkably in those three centuries. But it doesn’t seem to be the right model in the age of Wikipedia. It doesn’t make sense for contemporary science to be so beholden to organizations which don’t seem necessary in the information age. In the last 40 years, there has been a dramatic consolidation of academic publishers: the five largest publishers in the natural and medical sciences now control over 50% of yearly citations, compared with less than 20% in 1970, according to “The Oligopoly of Academic Publishers in the Digital Era,” a study published in PLoS. This consolidation has coincided with increased profit margins for those publishers and rising pressure on scientists to publish. “It’s messed up when Nobel laureates like Peter Higgs admit that they wouldn’t have been able to survive their early careers in today’s academic climate,” said Louie. “He thinks the pressure to constantly publish would’ve prevented him from coming up with the Higgs field!”

“Does science even proceed on journal publications?” Phil asked the group. Everyone glanced around; some shoulders shrugged. He wasn’t exactly sure what he meant by that question, but it seemed that any metric for the success and impact of a scientist should be connected in a clear way to the advance of scientific understanding. “Science proceeds by tutorials!” exclaimed Rosco. “I don’t think anyone understood maximum entropy or information theory until Simon gave that tutorial!” He was referencing a tutorial from earlier in the week of the conference, which had helped researchers from all different fields understand the core concepts of information theory and how simple aspects of it can be applied to almost any field. Roth chimed in, “Even if we all agree publications are not the right benchmark, what else are we supposed to use? Twitter followers?” Phil sighed and replied, “I’m sick of that being the end of the conversation! Computer science uses ‘invited talks’[4] as a metric, why don’t other fields?”

“Computer science does that because it is a very young field, relatively speaking, and because historically it has been about developing algorithms; the proof was always in the software, so a paper wasn’t necessary,” said Sonya. “It wouldn’t change anything, though. Computer science conferences have all the same problems as publishers in other fields.”

“Well then, can’t we use some kind of combination of papers, talks, and public engagement?” responded Phil.

“It’s pointless to be mad about this,” said Louie, “people who have tenure decide these sorts of things, and the people with tenure got there by getting published, so they won’t want to change this system, even if it’s broken.”

Phil slammed his fist down on the fake wood of the table. “You should be mad about this! If you think something is wrong about science, then you should be mad about it!” There was a brief silence. Everyone stared at him. Sonya and Louie started laughing. “Wow man! You’re so passionate, I love it!” said Louie. Phil took a deep breath and looked over his shoulder; he needed to be careful, he had seen people kicked out of this bar for less. “My point is just that ‘publish or perish’ disproportionately affects interdisciplinary researchers, who must spend more time synthesizing facts and concepts rather than generating new ones. If we, as interdisciplinary researchers, think there is a better way to go about doing interdisciplinary science, we should at least be talking about it, if not yelling about it.”

“That’s all conditioned on the fact that we are interdisciplinary researchers,” said Roth. “Complexity science has its own legitimacy issues. We need to prove it’s worth something before we go all crazy trying to reform the entire scientific community.” Phil kept his mouth shut; he completely disagreed with everything Roth had just said, but talking about it more would get him so riled up that he might get kicked out of the bar. He finished his beer and stood up. “Anyone want another round?” he asked as he walked towards the bar. Sandy raised her hand. “What are you drinking?” Phil asked. “I forget, just order me an IPA, they are better here than in London.” He walked to the bar and ordered two Lagunitas, thinking that she should taste a California classic before she left the States. By the time he sat back down it was 1:40 a.m., and the conversation had meandered away; Sandy and Roth were now poring over the beauty of the AdS/CFT duality[5].

Phil didn’t bring up any other issues with science that night, but inside he was still fuming. He couldn’t understand how anyone could see these problems and not be mad about them. When 2 a.m. rolled around, Sandy needed help finishing her beer as they were herded out of the bar. Phil caught an Uber home from there. He considered trying to carry on a normal conversation with the driver, but it was beyond him. He reflected on how insane science seemed. How did science ever distinguish itself from other human endeavors? How different could it really be from art, literature, or business? After all, it was a flawed system, built out of the work of flawed individuals. On its face, it seemed absurd that any human endeavor would ever attempt to make objective claims about reality. Peer review isn’t perfect, and it never has been. On the other hand, he thought as he pulled out his cell phone, science really does seem to be different from other schools of thought. The people who put a man on the moon and the people looking for cures for cancer don’t consider themselves artists or lawyers while they are doing that kind of work. The current system, in spite of all of its flaws, has allowed the world to progress in unprecedented ways. People today are healthier, more productive, and more connected than at any other point in human history, in large part because of scientific and technological breakthroughs which would’ve been impossible without the peer review process. Phil wondered if there was actually a better way to proceed with science. Was it possible that all the issues he and his peers had with the current model were simply the unavoidable cost of any system of peer review?

[1] It is impossible to separate the history of science from the history of western society.

[2] In fact, the world wide web as we know it was pioneered by Tim Berners-Lee while he was working at CERN, the same organization which discovered the Higgs boson a couple of years ago.

[3] Gowers has famously led a boycott against the Elsevier Publishing Group, which is still ongoing. Most of his supporters are mathematicians; Elsevier has never had a prestigious mathematics journal.

[4] Invited talks are typically keynote presentations at conferences or workshops, where the speaker has been specifically invited by the organizers.

[5] The AdS/CFT duality is a unifying result from the late 1990s that connects quantum field theory to string theory.

My view of the absurd

The title of this blog, “accepting the absurd,” is a moniker I’ve used in several places, so I thought I would use this first post to explain why I love the phrase and what it means to me.

“The absurd” is a philosophical concept at the heart of absurdism, a branch of philosophical thought closely related to existentialism and nihilism. It is a reference “to the conflict between (1) the human tendency to seek inherent value and meaning in life and (2) the human inability to find any. In this context absurd does not mean ‘logically impossible’, but rather ‘humanly impossible’.” (Wikipedia) Absurdism was largely developed by Søren Kierkegaard and Albert Camus. I’ve never read any Kierkegaard directly (or much else beyond surfing Wikipedia pages), but I enjoy Camus’ version and interpretation of the absurd most.

For Camus, “the freedom of humans is thus established in a human’s natural ability and opportunity to create their own meaning and purpose; to decide (or think) for him- or herself. The individual becomes the most precious unit of existence, representing a set of unique ideals that can be characterized as an entire universe in its own right.” (Wikipedia) I find this particularly inspiring for several reasons. First, it elevates conscious existence to a fundamental role in the world. It doesn’t admit freedom as a permission granted by a government or by a society. Instead it defines freedom as a natural human ability. Freedom is in what you make of your life; the degree of someone’s freedom is constrained not by the world but by the individual’s own creativity, and by how they find meaning and purpose in their life.

The notion that each individual is a universe in their own right is an idea I am very fond of. People are so vastly complex that it is impossible to truly understand another individual (and perhaps even yourself), and yet we are a naturally social species. Despite the fact that we can never truly leave the subjective bubbles of our existence, we can interact in profound ways. I find my interactions with other individuals to be the most inspiring aspect of my life. Whether the interactions are intellectual, social, emotional, or physical, the fact that two existences can come together and take part in something that is contained within neither is nothing short of beautiful.

For Camus, freedom and beauty in life both came from accepting the absurd as a fact of existence: not accepting it by resigning oneself to suicide, but rather by revolting against the indifference of the universe, and finding passion by living life wholeheartedly.

I grew up without ever explicitly thinking that there was an intrinsic purpose to life, although I never considered the alternative. For most of my life I thought that a purpose would present itself to me; that if I waited patiently I wouldn’t have to look, and the ultimate goal of my life would be obvious. In a sense, I suppose that worked out for me. I don’t remember when I first realized that an objective purpose wasn’t something I would discover. Whenever it did occur to me, I didn’t seriously consider suicide (philosophical or otherwise), in the way of Camus. However, the struggle Camus presented in The Myth of Sisyphus has helped me formalize the way in which I think about my life. I try to accept the absurd in the way I lead my life: I (unapologetically) take advantage of the freedoms I’m afforded to see as much beauty in this world as I can. I try to meet as many people as I can, and to learn as much about other forms of creativity as possible.

For me, the most troubling aspect of life is a different kind of absurdity: the conflict between the human tendency to sympathize with others and the human inability to truly understand another individual.

If you are interested in Camus’ Absurdism, this video is insightful.

My interest in the absurd was sparked in high school by this Streetlight Manifesto track, but it might not mean as much to you.