Having a paper rejected is one of the certainties of academic life. While there are some strategies to decrease the probability of facing a rejection, today I want to focus on my tips to deal with them—particularly for the benefit of younger scientists.
There are two broad kinds of rejections: desk rejections and rejections from reviewers. In either case, the best advice is never to take immediate action after receiving the dreaded rejection letter. Take a day or two, then react with a cooler head. Remember, this isn’t about you; it’s hard not to take it personally, but trust me, it isn’t.
The first kind, desk rejections, come directly from the chief or associate editors of the journal to which you submitted your work. They tend to be quick and rather uninformative, except perhaps for noting the incompatibility—to put it nicely—of your work with the scope of the journal. These are also sometimes the hardest to face, since they make you feel your work is simply not good enough to be published; but they’re also the quickest, and in the publish-or-perish scheme of things, time is key. After getting a desk rejection, if no other input is given, just try again; one tip—though not infallible—for choosing a proper journal is to look at which journals you cite in your own work and pick the one with the highest frequency. Sometimes editors offer a transfer to another journal from the same publishing house; my advice is to always say yes to transfers: the submission is made for you by the editorial staff, the manuscript arrives with a sort of recommendation between the editors involved, and the start-again process is expedited. Of course, a transfer does not mean your manuscript will get accepted, but whenever one is offered there is a good chance the first editor thinks your work should be kept inside their publishing house instead of risking you going to another one. Appealing a desk rejection is highly discouraged, since it practically never works. Sure, you may think the editor will kick himself in the rear once you get the Nobel prize, but telling them so, particularly in colorful language, will not make them change their minds.
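The journal-frequency tip above is easy to automate. A minimal sketch using Python's `collections.Counter` (the journal list here is of course hypothetical example data, not from any real manuscript):

```python
from collections import Counter

# Journals cited in your own manuscript's reference list
# (made-up example data for illustration).
cited_journals = [
    "J. Phys. Chem. A", "Inorg. Chem.", "J. Phys. Chem. A",
    "Phys. Chem. Chem. Phys.", "J. Phys. Chem. A", "Inorg. Chem.",
]

# Rank journals by how often you cite them; the most frequently
# cited one is a reasonable first submission target.
ranking = Counter(cited_journals).most_common()
print(ranking[0][0])  # -> J. Phys. Chem. A
```

Most reference managers can export the bibliography as plain text, so building the list is usually a one-line parsing job.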
Rejections after peer review are trickier. If your manuscript went out to peer review, it means the editors in charge of it thought your work publishable, but of course it needed to be looked at by experts to make sure it was done the right way with all or most bases covered (you know what they say, two heads are better than one; try three!). This kind of rejection takes longer, usually two or three weeks—sometimes even longer—but all things being fair, polite, and objective, it is also the most informative. Reviewers will try to find holes in your logic and flaws in your research, and when they find them they will not hold back their thoughts; you’re in for the hard truth. So of course this kind of rejection is also hard to take; it makes you feel, again, like your work is not worthy, that you’re not worthy as a scientist. But the big advantage is that you now have a blueprint of things to fix in your manuscript: a set of experiments is missing? Run them. Key literature wasn’t cited? Read it and cite it appropriately. Take peer review objectively, but never dismiss it by just submitting the manuscript as is to a different journal, for chances are you’ll get some of the same reviewers; and even if you don’t, it’s unethical to dismiss the advice of peers. They are your peers in the end, not your bosses but your peers; don’t lose sight of that. Also, it’s very frustrating for reviewers to find that authors managed to get published without paying the slightest attention to their suggestions. Appealing a peer-review rejection is hard but doable, and then you have to weigh what you value most: your paper in its original condition being published in that specific journal, or fixing it and starting again.
An appeal of a flat rejection is hardly ever won, but it may well establish a conversation with other scientists (the referees) about their point of view on your work; just don’t think you’ve made instant buddies who will now coach you through academic life.
The peer review system is far from perfect, but done properly it is still the best thing we’ve got. Alternatives are being tested nowadays to reduce bias, like open reviews signed and published by the reviewers themselves, or double and even triple blind peer review (in the latter not even the editor knows the identities of authors or reviewers), but until these prove useful we largely have to cope with and adapt to single blind peer review (just play nice, people). In some instances the dreaded third reviewer appears, and even a fourth and a fifth; there are no written laws, and I’m not aware of any journal specifying the number of referees to be involved in the handling of a manuscript. Opinions among reviewers can vary widely, ranging all the way from accept to reject. Extra reviewers may appear because the editor thinks one or more of the original ones didn’t do their job properly (in either direction) and brings in another to break the tie, or to outweigh the opinion of a clearly biased reviewer. If you think there is bias, ask the editor whether a new set of reviewers may be included to complete the process; more often than not they will say no, but if you raise a good point they might feel compelled to do so.
“Science is a process that starts at the library and ends at the library.”
Dr. Jesús Gracia-Mora, School of Chemistry, UNAM, ca. the 1990s
These are truths we must learn from a young age. A science project does not end at the lab but at the library; therefore I let my students—even the undergrads—handle the submission process of their manuscripts along with me, and I involve them in the peer review process (sometimes, and to some limited extent, even when I’m the reviewer) just so they know that getting a rejection letter is part of the process and should never be equated with the relative quality or self-worth of a scientist, since that is hardly what the publication process looks at.
So, in a nutshell, if you got a rejection letter, get back on the proverbial saddle and try again. And again. And once again.
Elucidating the pairing of non-hydrogen-bonded unnatural base pairs (UBPs) is still a controversial subject due to the lack of specificity in their mutual interactions. Experimentally, NMR is the method of choice, but the DNA strand must be affixed to a template of sorts, such as a polymerase protein. These discrepancies are well documented in a recent review which cites our previous computational work, both DFT and MD, on UBPs.
Since that last paper of ours on synthetic DNA, my good friend Dr. Rodrigo Galindo from the University of Utah and I have had serious doubts about the real pairing fashion exhibited by Romesberg’s famous hydrophobic nucleotides d5SICS–dNaM. While the authors claim a stacked pairing (within the context of the strand in the KlenTaq polymerase enzyme), our simulations showed a Watson-Crick-like pairing was favored in the native form. To further shed light on the matter we performed converged, microseconds-long simulations, varying the force field (two recent AMBER force fields were explored: Bsc1 and OL15), the water model (TIP3P and OPC), and the ionic compensation scheme (Na+/Cl– or Mg2+/Cl–).
The image below shows how the pairing is consistently WC (dC1′–C1′ ≈ 10.4 Å) in the most populated clusters, regardless of the force field.
Also, a flipping experiment was performed in which both nucleotides were placed 180.0° outwards and the system was left to converge inwards, to explore a ‘de novo’ pairing guided solely by their mutual interactions and the template formed by the rest of the strand. The C1′–C1′ distance populations were 10.4 Å for Bsc1 (regardless of ionic compensation) and 9.8 Å for OL15 (10.4 Å where Mg2+ was used as charge compensation).
Despite the successful replication of these two nucleotides by a living organism (which is a fantastic feat!), there is little chance they can be used for real coding applications (biological or otherwise) due to the lack of structural control of the double helix. The work of Romesberg is impressive, make no mistake about it, but my money isn’t on hydrophobic unnatural nucleotides for information applications 🙂
All credit and glory is due to the amazing Dr. Rodrigo Galindo-Murillo from the University of Utah, where he works as a developer of the AMBER code among many other things. Go check his impressive record!
The format of a research paper hasn’t changed much throughout history, despite the enormous changes in platforms available for their consumption and the near extinction of the library issue. Convenient electronic files such as PDFs still resemble printed-and-bound-in-issues papers in their layout instead of exploiting the seemingly endless capabilities of the electronic format.
For instance, why do we still need page numbers? A DOI is a full, traceable, and unique identification for each work, and there are so many papers nowadays that publishers have to pour them out as e-first, ASAP, and just-accepted versions before assigning them page numbers, a process which is still a concern for some researchers (and even for some of the organizations funding them or evaluating their performance). Numbers for issues, volumes, and pages are library indexes needed to sort and retrieve information from physical journals, but in the e-realm, where one can browse all issues online, perform a search, and download the results, these indexes are hardly of any use; only the year is helpful in establishing a chronological order for the development of ideas. This brings me to the next issue (no pun intended): if bound issues are no longer a thing, then neither should be covers. Being selected for a cover is a huge honor; it means the editorial staff think your work stands out from the works published in the same period. But nowadays it is an honor that comes at a price, sometimes a high price. With the existence of covers, back covers, inner covers, inner back covers, and whatnot at USD$1,500 a piece, the honor gets a bit diluted. Advertisers know this, and they now place their ads as banners, pop-ups, and other digital formats instead of (to some extent) paying for ad pages in the journals.
I recently posted a quick informal poll on Twitter about the scientific reading habits of chemists, and I confirmed what I expected: only one in five still prefers to mostly read papers on actual paper*; the rest rely on an electronic version, such as the HTML full text or, most popular of all, the PDF in a suitable reader.
— Joaquin Barroso (@joaquinbarroso) June 3, 2019
What came as a surprise to me was that in the follow-up poll, reference manager programs such as Mendeley, Zotero, EndNote, or ReadCube are preferred by only 15%, while 80% prefer a PDF reader (I’m guessing Acrobat Reader might be the most popular). A minority seems to prefer the HTML full-text version, which I think is the richest but is hardly customizable for note taking, sharing, or, uhm, hoarding.
A follow up on the previous poll. Dear #ChemTweeps, if you mostly read papers in electronic format what is your preferred platform?
— Joaquin Barroso (@joaquinbarroso) June 10, 2019
I’m a Mendeley user because I like the integration between users, its portability between platforms, and the synchronization features, but if I were to move to another reference manager it would be ReadCube. I like taking notes, highlighting text, and adding summaries and ideas onto the file, but above all I like being able to search across the myriad of PDF files I’ve accumulated over the years. During my PhD studies I had piles of (physical) paper and folders of PDF files that were sometimes easier to print than to sort and organize (I even kept a spreadsheet of the literature read, a nightmarish project in itself!)
So, here is my wish list for what I want e-papers in the 21st century to do. Some features are already available in some journals, some can be achieved within the PDF itself, and others would require a new format or a new platform. Please comment on what other features you would like to have in papers.
- Say goodbye to the two-column format. I’m zooming in to a single column anyway.
- Pop-up charts/plots/schemes/figures. Let me take a look at any graphical object by hovering (or 3D-touching in iOS, whatever) over the “see Figure X” legend, instead of having to move back and forth to check it, especially when the legend is “see Figure SX” and I have to go to the Supporting Information file/section.
- Pop-up references. Currently some PDFs let you jump to the References section when you click on a citation, but you can’t jump back; you have to scroll and find the point where you left off.
- Interactive objects. Structures, whether from X-ray diffraction experiments or calculations, could be deposited as raw coordinate files for people to play with and, most importantly, to download** and work with. This would increase the hosting space journals need to devote to each work, so I’m not holding my breath.
- Audio output. This one may be trickier, but it would be the most helpful by far. I commute long hours, so having papers read out loud would be a huge time-saver, but it has to be smart. Currently I make Siri read papers by opening them in the Mendeley app, then “select all“, “voice“, but when it hits a formula or a set of equations the flow is lost (instead of reading water as ‘H-Two-O‘, it reads ‘H-subscript-Two-O‘; try having the formula of a perovskite read out).
- A compiler that outputs the ‘traditional version‘ for printing. Sure, why not.
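The formula-reading problem from the audio-output item above is partly a preprocessing problem. A toy sketch (not any app’s actual feature) that spells out single-digit stoichiometric subscripts before handing the text to a text-to-speech engine:

```python
import re

# Spell out the digits of simple chemical formulas so a text-to-speech
# engine reads "H2O" as "H two O" instead of "H-subscript-two-O".
# Toy sketch only: multi-digit subscripts, charges, hydrates, and
# parenthesized groups would need more careful handling.
DIGITS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
          "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def speakable(text: str) -> str:
    # Replace every digit that directly follows a letter (i.e. a
    # stoichiometric subscript) with its spoken name.
    out = re.sub(r"(?<=[A-Za-z])(\d)",
                 lambda m: " " + DIGITS[m.group(1)] + " ", text)
    return out.strip()

print(speakable("H2O"))  # -> H two O
```

Something like this, wired in front of the system speech synthesizer, would already make commute-listening far less painful.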
I realize this post may come across as shallow in view of the Plan-S or FAIR initiatives, sorry for that, but comfort is not incompatible with accessibility.
What other features do you think research papers should have by now?
* It is true that our attention, and more importantly our retention of information, is not the same when we read on paper as when we read on a screen. Recently there was an interview on this matter on Science Friday.
** I absolutely hate having a Supporting Information section with long PDF lists of coordinates to copy-paste and fix into a new input file. OpenBabel, people!
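The copy-paste-and-fix chore from the footnote above can at least be scripted. A minimal sketch (the coordinates are made up) that turns an element/x/y/z block pasted from an SI PDF into a valid .xyz file, which obabel can then convert onward to any other format:

```python
# Turn a coordinate block copied from a Supporting Information PDF
# into a valid .xyz file (atom-count header plus comment line).
# The raw block below is made-up example data.
raw = """
C    0.0000   0.0000   0.0000
O    1.2100   0.0000   0.0000
H   -0.5500   0.9400   0.0000
"""

atoms = [line.split() for line in raw.strip().splitlines()]
xyz = "{}\npasted from SI\n".format(len(atoms))
xyz += "\n".join("{:2s} {:>10s} {:>10s} {:>10s}".format(*a) for a in atoms)
print(xyz)
```

From there, a single `obabel molecule.xyz -O molecule.mol` call does the rest; still, journals depositing machine-readable coordinates in the first place would make all of this unnecessary.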
Calculating the pKa value of a Brønsted acid is very hard, like really hard. A full thermodynamic cycle (fig. 1) needs to be calculated, along with high-accuracy solvation free energies for each of the species under consideration, not to mention the use of expensive methods, which will be reviewed here in another post in two weeks’ time.
Finding descriptors that let us circumvent the need for such sophisticated calculations can help a great deal in estimating the pKa value of any given acid. We’ve been interested in the reactivity of σ-hole-bearing groups in the past, and just like halogen, tetrel, pnicogen, and chalcogen bonds, hydrogen bonds are highly directional and their strength depends on the polarization of the O–H bond. Therefore, we suggested the use of the maximum surface electrostatic potential (VS,max) on the acid hydrogen atom of carboxylic acids as a descriptor for the strength of their interaction with water, the first step in the deprotonation process.
We selected six basis sets and five density functionals plus the MP2 method, for a total of thirty-six levels of theory, to optimize and calculate VS,max on thirty carboxylic acids, a grand total of 1,080 wavefunctions, which were later passed on to MultiWFN (all calculations were performed with PCM = water). The experimental pKa values correlated well across the levels of theory (R² > 0.9), except for B3LYP. The best correlations were obtained with LC-wPBE/cc-pVDZ and wB97XD/cc-pVDZ. From the latter level of theory, the linear fit yielded the following equation:
pKa = -0.2185(VS,max) + 16.1879
Differences in pKa turned out to be less than 0.5 units, which is remarkable for such a straightforward method; bear in mind that calculating full thermodynamic cycles at around chemical accuracy (1.0 kcal/mol) yields pKa differences above 1.0 units.
We then put this equation to the test with 10 different carboxylic acids, and the predictions showed a correlation of 98% (fig. 2).
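The fitted line lends itself to a one-liner. A sketch of the equation above (the example VS,max value is made up, and the input must be in the same units used for the fit at the wB97XD/cc-pVDZ level):

```python
def predict_pka(vs_max: float) -> float:
    """Estimate the pKa of a carboxylic acid from the maximum surface
    electrostatic potential (VS,max) on its acidic hydrogen, using the
    linear fit reported above (wB97XD/cc-pVDZ)."""
    return -0.2185 * vs_max + 16.1879

# Example with a hypothetical VS,max value, in the units of the fit.
print(round(predict_pka(52.0), 2))  # -> 4.83
```

A black-box workflow would only need to wrap this around the wavefunction analysis step that produces VS,max.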
I think this method can really catch on as a quick way to predict the pKa value of any carboxylic acid imaginable. We’re now working on extending the model to other groups (e.g., Brønsted bases) and putting together a black-box workflow to make it even more accessible and straightforward to use.
We’ve recently published this work in the journal Molecules, an open-access publication. Thanks to Prof. Steve Scheiner for inviting us to participate in the special issue devoted to tetrel bonding. Thanks to Guillermo Caballero for the inception of this project, to Dr. Jacinto Sandoval for taking time from his research in photosynthesis to work on this pet project of ours, and of course to the rest of the students (Gustavo Mondragón, Marco Diaz, Raúl Torres) whose hard work produced this paper.
Two new papers on the development of chemosensors for different applications were recently published and we had the opportunity to participate in both with the calculation of electronic interactions.
A chemosensor requires a measurable response, and calculating that response from first principles based on the electronic structure, or calculating another physicochemical property related to it, are both useful strategies in molecular design. Additionally, electronic structure calculations help us unveil the molecular mechanisms underlying a sensor’s response and efficiency, as well as providing a starting point for its continuous improvement.
In the first paper, CdTe quantum dots (QDs) are used to visualize cell-membrane damage in real time through a Gd Schiff-base sensitizer (GdQDs). This probe interacts preferentially with a specific sequence motif of the NHE-RF2 scaffold protein, which is exposed during cell damage. This interaction yields intensely fluorescent droplets which can be visualized in real time with standard instrumentation. Calculations at the M06-2X/LANL2DZ level of theory, plus an external double-zeta-quality basis set on Gd, were employed to characterize the electronic structure of the Gd³⁺ complex, the quantum dot, and their mutual interactions. The first challenge was to come up with the right multiplicity for Gd³⁺ (an f⁷ ion), for which we had no experimental evidence of its magnetic properties. From searching the literature and talking to my good friend, inorganic chemist Dr. Vojtech Jancik, it was more or less clear the multiplicity had to be an octet (all seven electrons unpaired).
As can be seen in figure 1a, the Gd–N interactions are mostly electrostatic in nature, a fact also reflected in the Wiberg bond indices, calculated as 0.16, 0.17, and 0.21 (a single bond would yield a WBI value closer to 1.0).
PM6 was employed to optimize the GdQD as a whole (figure 1f), and MM-UFF to characterize its union with a peptide sequence (figure 2), from which we observed, somewhat unsurprisingly, that Gd³⁺ interacts preferentially with the electron-rich residues.
This research was published in ACS Applied Materials and Interfaces. Thanks to Prof. Vojtech Adam from the Mendel University in Brno, Czech Republic for inviting me to collaborate with their interdisciplinary team.
The second sensor I want to write about today is a closer-to-home collaboration with Dr. Alejandro Dorazco, who developed a fluorescent porphyrin system whose emission is strongly quenched in the presence of iodide but not any other halide. This allows for fast detection of iodide anions, which are related to some gland diseases, in aqueous samples such as urine. This probe was also granted a patent, which technically lists yours truly as an inventor. Cool!
The calculated interaction energy between I⁻ and the porphyrin was huge, which supports the idea of an ionic interaction through which charge-transfer interactions quench the fluorescence of the probe. Figure 3 above shows how the HOMO largely resides on the iodide, whereas the LUMO is located on the π-electron system of the porphyrin.
This research was published in Sensors and Actuators B – Chemical.
I began my path in computational chemistry while I was still an undergraduate student working on my thesis under Professor Cea at UNAM, synthesizing main-group complexes with sulfur-containing ligands. Quite a mouthful, I know. My first calculations therefore dealt with obtaining bond indices for bidentate ligands bonded to tin, antimony, and even arsenic; yes, I worked with arsenic once! Happily, I keep a tight bond (pun intended) with inorganic chemists, and the two recent papers published with the group of Prof. Mónica Moya are proof of that.
In the first paper, cyclic metallaborates were formed with Ga and Al, but when a cycle of a given size formed with one metal it didn’t with the other (fig. 1), so I calculated the relative energies of both analogues while compensating for the change in the number of electrons with the following equation:
ΔE = E(MnBxOy) − nE(M) + nE(M′) − E(M′nBxOy)   (Eq. 1)
A seamless substitution would imply ΔE = 0 when changing from M to M′.
The calculated ΔE were: ΔE(3/3′) = -81.38 kcal/mol; ΔE(4/4′) = 40.61 kcal/mol; ΔE(5/5′) = 70.98 kcal/mol
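Eq. 1 is simple enough to script. A sketch with made-up energies, just to show the bookkeeping (note that a "substitution" of a metal by itself returns exactly zero, as the seamless-substitution condition requires):

```python
HARTREE_TO_KCAL = 627.509  # hartree -> kcal/mol conversion factor

def delta_E(E_M_borate, E_Mp_borate, E_M_atom, E_Mp_atom, n):
    """Eq. 1: energy change for swapping metal M for M' in a
    metallaborate, compensating for the n exchanged metal atoms.
    Inputs in hartree, result in kcal/mol. The example values
    below are made up, not from the paper."""
    dE = E_M_borate - n * E_M_atom + n * E_Mp_atom - E_Mp_borate
    return dE * HARTREE_TO_KCAL

# Sanity check: substituting M by itself must give exactly zero.
print(delta_E(-1500.0, -1500.0, -242.0, -242.0, n=2))  # -> 0.0
```

Feeding in the actual SCF energies of the Ga and Al analogues and of the isolated metal atoms reproduces the ΔE(3/3′), ΔE(4/4′), and ΔE(5/5′) values quoted above.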
In all, the increased stability and higher covalent character of the Ga-O-Ga unit compared to that of the Al analogue favors the formation of different sized rings.
Additionally, a free-energy-change analysis was performed to assess the relative stability of the compounds. Free energy changes can be obtained easily from the thermochemistry section of a frequency (FREQ) calculation in Gaussian.
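Pulling those numbers out of a pile of output files is a natural thing to script. A minimal sketch that greps the standard thermochemistry label out of a Gaussian frequency job (the excerpt below is a toy string with a made-up value, not a real log):

```python
import re

def gibbs_from_log(text: str) -> float:
    """Extract the Gibbs free energy (in hartree) from the
    thermochemistry section of a Gaussian frequency-job output.
    Minimal sketch; assumes a normally terminated log containing
    the standard label."""
    match = re.search(
        r"Sum of electronic and thermal Free Energies=\s*(-?\d+\.\d+)",
        text)
    if match is None:
        raise ValueError("no thermochemistry section found")
    return float(match.group(1))

# Toy excerpt mimicking a Gaussian log (value is made up).
log = " Sum of electronic and thermal Free Energies=      -1923.456789\n"
print(gibbs_from_log(log))  # -> -1923.456789
```

Looping this over the logs of two compounds and subtracting gives the ΔG used in the stability comparison.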
This paper is published in Inorganic Chemistry under the following citation: Erandi Bernabé-Pablo, Vojtech Jancik, Diego Martínez-Otero, Joaquín Barroso-Flores, and Mónica Moya-Cabrera* “Molecular Group 13 Metallaborates Derived from M−O−M Cleavage Promoted by BH3” Inorg. Chem. 2017, 56, 7890−7899
The second paper deals with heavier atoms and the bonds formed around yttrium complexes with triazoles, for which we calculated a more detailed distribution of the electronic density and concluded that the coordination of Cp to Y has a large ionic component.
This paper is published as: Ana Cristina García-Álvarez, Erandi Bernabé-Pablo, Joaquín Barroso-Flores, Vojtech Jancik, Diego Martínez-Otero, T. Jesús Morales-Juárez, Mónica Moya-Cabrera* “Multinuclear rare-earth metal complexes supported by chalcogen-based 1,2,3-triazole” Polyhedron 135 (2017) 10-16
We keep working on other projects, and I hope we keep doing so for the foreseeable future, because those main-group metals have been in my blood all this century. Thanks and a big shoutout to Dr. Mónica Moya for keeping me on her highly productive and competitive team of researchers; here’s to many more years of joint work.
As in the case of proteins, the functioning of DNA is highly dependent on its 3D structure and not only on its sequence; the difference is that protein tertiary structure shows enormous variety, whereas DNA is (almost) always a double helix with little variation. The canonical base pairs AT and CG stabilize the famous double helix, but the same cannot be guaranteed when non-canonical, unnatural base pairs (UBPs) are introduced.
When I first took a look at Romesberg’s UBPs, d5SICS and dNaM (referred to throughout the study as X and Y, see Fig. 1), it was evident that they could not form hydrogen bonds; in the end they’re substituted naphthalenes with no discernible way of creating a synthon like their natural counterparts. That’s when I called Dr. Rodrigo Galindo at the University of Utah, who is one of the developers of the AMBER code and very knowledgeable on matters of DNA structure and dynamics; he immediately got on board, and soon enough we were launching molecular dynamics simulations and quantum mechanical calculations. That was more than two years ago.
Our latest paper in Phys.Chem.Chem.Phys. deals with the dynamical and structural stability of a DNA strand in which Romesberg’s UBPs are introduced sequentially, one pair at a time, into Dickerson’s dodecamer (a palindromic sequence) from the Protein Data Bank. A d5SICS–dNaM pair was inserted right in the middle, forming a tridecamer; as expected, 10+ microsecond molecular dynamics simulations exhibited the same stability as the control dodecamer (Fig. 2 left). We didn’t need to go far with the substitutions to get the double helix to go awry within a couple of microseconds: three non-consecutive inclusions of UBPs were enough to produce a less regular structure (Fig. 2 right); with five, a globular structure was obtained for which it is not possible to get a proper average of the most populated structures.
X and Y don’t form hydrogen bonds, so the pairing is pretty much forced by the scaffold of the rest of the DNA double helix. There is some controversy as to how X and Y fit together, whether they overlap or just wedge between each other; according to our results, a C1′–C1′ distance of 11 Å is most stable, consistent with the wedging conformation. Still, much work is needed to understand the pairing between X and Y, and even more so to get a pair of useful UBPs. More papers on this topic in the near future.
Ever since I read the highly praised article by Floyd Romesberg in Nature back in 2013, I have been really interested in synthetic biology. In said article, an unnatural base pair (UBP) was not only inserted into a DNA double strand in vivo, but the organism was even able to reproduce the UBPs in subsequent generations.
Inserting new unnatural base pairs into DNA works a lot like editing a computer’s code. Inserting a couple of UBPs in vitro is like inserting a comment: it won’t make a difference, but it’s still there. If the DNA sequence containing the UBPs can be amplified by molecular biology techniques such as PCR, it means a polymerase enzyme is able to recognize it and put it in place; this is equivalent to inserting a ‘hello world’ section into working code: it will compile, but it’s pretty much useless. Inserting these UBPs in vivo means the organism is able to thrive despite the large deformation in a short section of its genetic code, but having them replicated by the chemical machinery of the nucleus is an amazing feat that only a few molecules could allow.
The ultimate goal of synthetic biology would be to find a UBP which codes effectively and purposefully during translation of DNA. This last feat would be equivalent to inserting a working subroutine in a program with a specific purpose. But the use of UBPs could serve more than the purpose of expanding the genetic code from a quaternary (base-four) to a senary (base-six) system: the field of DNA origami could also benefit from an expansion of the chemical and structural possibilities of the famous double helix, and marking and editing a sequence would become easier with distinctive sections containing nucleotides other than A, T, C, and G.
It is precisely on the concept of the double helix that our research centers, since the available biochemical machinery for translation and replication can only work on a double helix; otherwise, the repair mechanisms get activated or the DNA simply stops serving its purpose (i.e., the code won’t compile).
My good friend, Dr. Rodrigo Galindo and I have worked on the simulation of Romesberg’s UBPs in order to understand the underlying structural, dynamical and electronic causes that made them so successful and to possibly design more efficient UBPs based on a set of general principles. A first paper has been accepted for publication in Phys.Chem.Chem.Phys. and we’re very excited for it; more on that in a future post.