As if I didn’t have enough things to do, I’m launching a new blog inspired by the #365papers hashtag on Twitter and the naturalproductman.wordpress.com blog. In it I’ll hopefully list, and write a femto-review of, every paper I read. This new effort is even more daunting than actually reading the huge digital pile of papers in my Mendeley To-Be-Read folder, the fattest of them all. The papers therein won’t constitute a comprehensive review of Comp.Chem. must-read papers, but rather papers relevant to our lab’s research or curiosity.
Maybe I’ll include some papers brought to my attention by the group, and they could do the review. The whole endeavor might flop in a few weeks, but I want to give it a shot; we’ll see how it mutates and whether it survives. So far I haven’t managed to review all the papers I’ve read, but maybe this post will prompt me to do so, if only to save some face. The domain of the new blog is compchemdigest.wordpress.com, though it should probably have included the word MY at the beginning to convey that it is only my own biased reading list. Anyway, if you’re interested, share it and subscribe, since those posts will not be publicized.
Ever since I read the highly praised 2013 Nature article by Floyd Romesberg, I have been really interested in synthetic biology. In that article, an unnatural base pair (UBP) was not only inserted into a DNA double strand in vivo, but the organism was even able to replicate the UBPs in subsequent generations.
Inserting new unnatural base pairs into DNA works a lot like editing a computer’s code. Inserting a couple of UBPs in vitro is like inserting a comment: it won’t make a difference, but it’s still there. If the DNA sequence containing the UBPs can be amplified by molecular biology techniques such as PCR, it means a polymerase enzyme is able to recognize the UBP and set it in place; this is equivalent to inserting a ‘hello world’ section into a working program: it will compile, but it’s pretty much useless. Inserting these UBPs in vivo means the organism is able to thrive despite the large deformation in a short section of its genetic code, but having them replicated by the chemical machinery of the nucleus is an amazing feat that only a few molecules could allow.
The ultimate goal of synthetic biology would be to find a UBP which codes effectively and purposefully during translation of DNA. This last feat would be equivalent to inserting a working subroutine with a specific purpose into a program. But expanding the genetic code from a quaternary (base four) to a senary (base six) system is not the only use for UBPs: the field of DNA origami could also benefit from an expansion of the chemical and structural possibilities of the famous double helix, and marking and editing a sequence would become easier with distinctive sections built from nucleotides other than A, T, C and G.
It is precisely around the concept of the double helix that our research takes place, since the available biochemical machinery for translation and replication can only work on a double helix; otherwise, the repair mechanisms get activated or the DNA simply stops serving its purpose (i.e. the code won’t compile).
My good friend Dr. Rodrigo Galindo and I have worked on the simulation of Romesberg’s UBPs in order to understand the underlying structural, dynamical and electronic causes that made them so successful, and to possibly design more efficient UBPs based on a set of general principles. A first paper has been accepted for publication in Phys. Chem. Chem. Phys. and we’re very excited about it; more on that in a future post.
The literature in synthetic chemistry is full of reactions that do occur, but very little or no attention is paid to those that do not proceed. The question here is: what can we learn from reactions that do not take place even when our chemical intuition tells us they’re feasible? Is there valuable knowledge to be acquired by studying the ‘anti-driving force’ that inhibits a reaction? This is the focus of a new manuscript recently published by our research group in Tetrahedron (DOI: 10.1016/j.tet.2016.05.058), which was the basis of Guillermo Caballero’s BSc thesis.
It is well known in organic chemistry that if a molecular structure has the possibility of being aromatic, it can somehow undergo an aromatization process to reach that more stable state. During some experimental efforts, Guillermo Caballero found two compounds that could easily be regarded as non-aromatic tautomers of a substituted pyridine but which were not transformed into the aromatic compound by any means explored, whether by treatment with strong bases or through thermal or photochemical reaction conditions.
These results led us to investigate the causes that inhibit these aromatization reactions, and here is where computational chemistry took over. As a first approach we proposed two plausible reaction mechanisms for the aromatization process and evaluated them with DFT transition state calculations at the M05-2X/6-31+G(d,p)//B3LYP/6-31+G(d,p) level of theory. The results showed that although the aromatic tautomers are indeed more stable than their corresponding non-aromatic ones, a high activation free energy is needed to reach the transition states. Thus, the barrier heights are the first reason why aromatization is inhibited; there just isn’t enough thermal energy in the environment for the transformation to occur.
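To put the barrier-height argument in perspective, here is a quick back-of-the-envelope sketch using the Eyring equation; the barrier values below are illustrative only, not the ones reported in the paper:

```python
import math

def eyring_rate(dg_act_kcal, T=298.15):
    """Unimolecular rate constant (s^-1) from the Eyring equation,
    k = (kB*T/h) * exp(-dG_act / (R*T))."""
    kB = 1.380649e-23   # Boltzmann constant, J/K
    h = 6.62607015e-34  # Planck constant, J*s
    R = 1.987204e-3     # gas constant, kcal/(mol*K)
    return (kB * T / h) * math.exp(-dg_act_kcal / (R * T))

# Hypothetical barriers: a ~20 kcal/mol barrier reacts on a timescale
# of minutes, while a ~40 kcal/mol barrier is effectively frozen at
# room temperature, no matter how exothermic the overall reaction is.
print(f"{eyring_rate(20.0):.2e} s^-1")
print(f"{eyring_rate(40.0):.2e} s^-1")
```

The exponential dependence is the whole point: doubling the barrier does not halve the rate, it wipes out some fifteen orders of magnitude.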
But this is only the proximal cause; we then searched for the distal causes (i.e. the reasons behind the high barriers). The second part of the work was the calculation of the delocalization energies and frontier molecular orbitals for the non-aromatic tautomers at the HF/cc-pVQZ level of theory, to gain insight into the large barrier heights. The energies showed a strong delocalization of the nitrogen’s lone pair toward the oxygen atom in the carbonyl group. This delocalization promotes the formation of an electron corridor built from frontier and close-to-frontier molecular orbitals, resembling an extended push-pull effect. The hydrogen atoms that could promote the aromatization process are shown to be chemically inaccessible.
Further calculations on a series of analogous compounds showed that the dimethylamino moiety plays a crucial role in preventing the aromatization process. When this group was replaced by a nitro group, the calculations yielded a decrease in the barrier height large enough for the reaction to proceed. Electronically, the bonding electron corridor is interrupted by a pull-pull effect, which was assessed through the delocalization energies.
The identity of the compounds under study was established through 1H and 13C NMR as well as 2D NMR experiments (HMBC, HMQC), so we had to dive headlong into experimental techniques to back our calculations.
As part of an ongoing collaboration with the University of Arizona (UA) and the Center for Advanced Research and Studies (CINVESTAV – Saltillo), we are looking into the use of calix[n]arenes as bio-remediation agents capable of extracting arsenic (V) and (III) species from water. Water contamination by arsenic is a pressing issue in northern Mexico and the southern US, so any effort aimed at its elimination has strong social and health repercussions.
As in previous studies, all calixarenes were optimized along with their corresponding guests within the cavity, namely H3AsO4, H2AsO4– and HAsO42–, at the DFT level with one of the so-called Minnesota functionals by Zhao and Truhlar, i.e. at the M06-2X/6-31G(d,p) level of theory. Interaction energies were calculated through the NBODel procedure. Calixarenes with R = SO3H and PO3H are the most promising leads. This study is now published in the Journal of Inclusion Phenomena and Macrocyclic Chemistry (DOI 10.1007/s10847-016-0617-0) as an online-first article.
This article is also the first to be published by our undergraduate (and, in a month, almost grad student) Gustavo Mondragón, who took on this project alongside his own research on photosynthesis.
Now my colleagues in Arizona and Saltillo, Prof. Reyes Sierra and Dr. Eddie López, respectively, will work on the experimental side of the project. Further calculations are under way to extend this study to As(III) and to other potential extracting materials, such as metallic nanoparticles to which calixarenes could be covalently linked.
As we approach the end of another year, and with it the time when my office becomes covered with post-it notes so I can find my way back into work after the holidays, we celebrate another paper published! This time at the Journal of Physical Chemistry A, as a follow-up to this other paper published last year in JPC-C. Back then we reported the development of a selective sensor for Hg(II); the sensor consisted of 1-amino-8-naphthol-3,6-disulphonic acid (H-Acid) covalently bound to a modified silica SBA-15 surface. H-Acid is fluorescent, and we took advantage of the fact that its fluorescence is quenched in the presence of Hg(II) in aqueous media but not by other ions, even closely related ones such as Zn(II) and Cd(II).

In this new report we delve into the electronic reasons behind the quenching process by calculating the most important electronic transitions within the framework of Time-Dependent Density Functional Theory (TD-DFT) at the PBE0/cc-pVQZ level of theory (we also placed an effective core potential on the heavy metal atoms to reduce the cost of each calculation). One of the things I personally liked about this work is the combination of techniques used to assess the photochemical phenomenon at hand, including various bond orders (Mayer, Fuzzy, Wiberg, delocalization indexes), time-dependent DFT and charge-transfer delocalizations. Although we calculated all these descriptors to account for the changes in the electronic structure of the ligand which lead to the fluorescence quenching, only the delocalization indexes calculated with QTAIM were used to draw conclusions, while the rest are collected in the SI section.
Thanks a lot to my good friend and collaborator Dr. Pezhman Zarabadi-Poor for all his work, interest and insight into the rationalization of this phenomenon. This is our second paper published together. By the way, if any of you readers are aware of a way to finance a postdoc stay for Pezhman here at our lab, please send us a message, because right now funding is scarce and we’d love to keep bringing you many more interesting papers.
For our research group this was the fourth paper published during 2014. We can only hope (and work hard) to have at least five next year without compromising their quality. I’m setting the goal to be 6 papers; we’ll see in a year if we delivered or not.
I’d also like to take this opportunity to thank all the readers of this little blog of mine for your visits and your live demonstrations of appreciation at various local and global meetings, such as the ACS meeting in San Francisco and WATOC14 in Chile; it means a lot to me to know that the things I write are read. If I were to make any New Year’s resolutions, it would be to reply more quickly to the questions posted: if you took the time to write, I should take the time to reply.
I wish you all the best for 2015 in and out of the lab!
Well, I only contributed the theoretical section by doing electronic structure calculations, so it isn’t really a paper we can ascribe to this particular lab; still, it is really nice to see my name in JACS alongside such a prominent researcher as Prof. Chad Mirkin from Northwestern University, in work closely related to my research interest in macrocyclic recognition agents.
In this manuscript, a calixarene is allosterically opened and closed reversibly by coordinating different kinds of ligands to a platinum center linked to the macrocycle. (This approach has been referred to as the weak link approach.) I recently visited Northwestern and had a great time with José Mendez-Arroyo, the first author, who showed me around and opened the possibility for further work between our research groups.
Closed, semi-open and fully open conformations; selectivity is modulated through cavity size. (Ligands: Green = Chloride; Blue = Cyanide)
Here at UNAM we calculated the interaction energies for the two guests that were successfully inserted into the cavity: N-methylpyridinium (Eint = 57.4 kcal/mol) and pyridine-N-oxide (Eint = +200.0 kcal/mol). Below you can see the electrostatic potential mapped onto the electron density isosurface for one of the adducts. The relative orientation of the guests within the cavity follows the expected (anti-)alignment of their mutual dipole moments. At this level of theory we might easily be inclined to assert that the most stable interaction is indeed the one in the semi-open compound, and that this is because host and guest are packed closer together, but there is also an orbital issue: pyridine-N-oxide is a better electron acceptor than N-methylpyridinium, and a closer look at the interacting (Natural Bond) orbitals makes it evident that a closer location does not necessarily yield a stronger interaction when the electron-accepting power of the ligand is weaker (which is, in my opinion, both logical and at the same time a bit counterintuitive, yet fascinating nonetheless).
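As a side note, the preference for anti-aligned dipoles can be rationalized with the classical point-dipole interaction energy; the sketch below uses hypothetical dipole magnitudes and a hypothetical geometry, not values from the paper:

```python
import numpy as np

def dipole_dipole_energy(p1, p2, r_vec):
    """Classical interaction energy (J) of two point dipoles p1, p2 (C*m)
    separated by r_vec (m): U = k * [p1.p2 - 3(p1.rhat)(p2.rhat)] / r^3."""
    k = 8.9875517923e9  # Coulomb constant 1/(4*pi*eps0), N*m^2/C^2
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return k * (np.dot(p1, p2) - 3 * np.dot(p1, rhat) * np.dot(p2, rhat)) / r**3

# Two 4-debye dipoles sitting side by side, 5 angstroms apart
# (made-up magnitudes, purely for illustration):
D = 3.33564e-30  # C*m per debye
r = np.array([5e-10, 0.0, 0.0])
parallel     = dipole_dipole_energy(np.array([0.0, 4 * D, 0.0]),
                                    np.array([0.0, 4 * D, 0.0]), r)
antiparallel = dipole_dipole_energy(np.array([0.0, 4 * D, 0.0]),
                                    np.array([0.0, -4 * D, 0.0]), r)
# Side-by-side dipoles are stabilized only when anti-aligned:
print(antiparallel < 0 < parallel)
```

Of course this classical picture ignores the orbital effects discussed above, which is precisely why the NBO analysis was needed.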
All calculations were performed at the B97D/LANL2DZ level of theory with Gaussian09 and NBO3.1 as provided within the former. Computing time at UNAM’s supercomputer ‘Miztli’ is gratefully acknowledged.
The full citation follows:
A Multi-State, Allosterically-Regulated Molecular Receptor With Switchable Selectivity
Jose Mendez-Arroyo †, Joaquín Barroso-Flores §,Alejo M. Lifschitz †, Amy A. Sarjeant †, Charlotte L. Stern †, and Chad A. Mirkin *†
Thanks to José Mendez-Arroyo for contacting me and giving me the opportunity to collaborate with his research; I’m sure this is the first of many joint projects that will mutually benefit our groups.
Happy new year to all my readers!
Having a new paper published is always a matter of happiness for this computational chemist, but this time I’m exceedingly excited to announce the publication of a paper in the Journal of Chemical Theory and Computation, which is my highest-ranked publication so far! It also marks the consolidation of our research group at CCIQS as a solid and competitive group within the field of theoretical and computational chemistry. The title of our paper is “In Silico design of monomolecular drug carriers for the tyrosine kinase inhibitor drug Imatinib based on calix- and thiacalix[n]arene host molecules. A DFT and Molecular Dynamics study“.
In this article we aimed to find a suitable (thia-)calix[n]arene-based drug delivery agent for the drug Imatinib (Gleevec by Novartis), a broadly used and powerful tyrosine kinase III inhibitor employed in the treatment of Chronic Myeloid Leukaemia and, to a lesser extent, Gastrointestinal Stromal Tumors. Although Imatinib (IMB) exhibits a bioavailability close to 90%, most of it is excreted, becomes bound to serum proteins, or accumulates in other tissues such as the heart, causing several undesired side effects which ultimately limit its use. By using a molecular capsule we can increase the molecular weight of the drug, thus increasing its retention, and at the same time prevent Imatinib from binding, in its active form, to undesired proteins.
We suggested 36 different calix- and thiacalix[n]arenes (CX) as possible candidates; the IMB–CX complexes were manually docked and then optimized at the B97D/6-31G(d,p) level of theory. Stefan Grimme’s B97D functional was selected for its inclusion of dispersion terms, so important in describing π-π interactions. Intermolecular interaction energies were calculated under the Natural Bond Orbital approximation; a stable complex was needed, but an overly stable complex would never deliver its drug payload! This brings us to the next part of the study: a monomolecular drug delivery agent must form a stable complex with the drug, but it must also be able to release it. Molecular Dynamics simulations (100+ ns) and umbrella sampling methods were used to analyse the release of the drug into the aqueous medium.
Potential Mean Force profiles for the four most stable complexes in the N1 and N2 positions from the QM calculations are shown below (red, complexes in the N1 position; blue, N2 position). These plots, derived from the MD simulations, give us an idea of the final destination of the drug with respect to the calixarene carrier. In the next image, the three preferred structures (rotaxane-like; inside; released) for the final outcome of the delivery process are shown. The stability of the complexes was also assessed by calculating ΔG binding values through the use of the Poisson equations.
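For readers unfamiliar with PMF profiles, here is a minimal sketch of how a release free energy can be read off such a curve. The profile below is synthetic (not the paper’s data), and the simple plateau-minus-well estimate ignores standard-state and volume corrections used in rigorous binding free energy calculations:

```python
import numpy as np

# Synthetic PMF (kcal/mol) along a host-guest separation coordinate (angstroms);
# the shape mimics a typical umbrella-sampling profile: a bound-state well
# at short separation followed by a flat plateau once the guest is released.
xi = np.linspace(0.0, 20.0, 201)
pmf = 12.0 * (1.0 - np.exp(-(xi / 4.0) ** 2))

def release_free_energy(pmf, plateau_points=20):
    """Crude estimate: average of the released-guest plateau minus
    the bound-well minimum, i.e. the work needed to free the drug."""
    unbound = pmf[-plateau_points:].mean()
    bound = pmf.min()
    return unbound - bound  # kcal/mol

dG = release_free_energy(pmf)
print(f"release costs about {dG:.1f} kcal/mol")
```

A carrier with a well that is too deep on such a curve would, as noted above, never deliver its payload.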
Thanks to my co-authors Maria Eugenia Sandoval-Salinas and Dr. Rodrigo Galindo-Murillo for their enormous contributions to this work; without their hard work and commitment to the project this paper wouldn’t have been possible.
This post was inspired by this other one, featured in WordPress’ Freshly Pressed section, on how non-scientists should read a scientific paper. While the approach presented therein is both valid and valuable, I’d like to address the way I think a scientist should read a paper, given that we need to read a lot of them at all times. Each scientist has their own reading style, not to mention their own writing style, and while my CV could indicate I don’t know how to do either one, here I present to you my scientific-paper-reading style, which I consider to be the most suitable for me.
I’d like to start by emphasizing that I dive into the scientific literature in a bona fide fashion. That is not to say I’m totally naive or even gullible, but even when science is all about questioning and casting doubt onto all sorts of claims, we can’t re-develop every bit of science we need. At a certain point we must start, *gasp*, trusting other scientists’ claims. Reading in what I call bona fide is not mutually exclusive with critical reading. This sort of scientific trust is earned, to a degree, mostly through two indicators: the author’s reputation at the time of publication of any given paper, as well as the journal’s. Neither indicator is without controversy or flaw.
The way I read a paper is the following: I start with the Abstract, then follow with the Conclusions, then the Results section, sometimes I read the details of the methodology and seldom read the Introduction. Let me explain.
I read the abstract first because I read in bona fide, just as I hope the authors wrote the paper in bona fide. If properly written, the abstract should include all the relevant information as to what was done, why, and how, but also point to the knowledge derived from it all: the conclusions! That is why I follow with that section. I’m interested in knowing what the authors learned and ultimately want me to learn from their study. Once again I’m reading in bona fide, so I hope they weren’t tampering with their results to fit their preconceptions, and that all experiments were thoroughly self-judged, validated, correlated, referenced and controlled. Recently, my sister Janet, who is a physicist working on her PhD in neuroscience, told me about some friends of hers who never (shall I say, never have I ever?) read the conclusions, so as not to become biased by the authors. To me it seems like too much work to scrutinize every piece of data again in order to come up with my own conclusions when the authors, collaborators, people in the hallway down the lab (optional), referees and editors (vide infra) have already (hopefully) done it (properly). Still, I put on my scientist badge and question everything I critically read in the results section, trying to understand how the authors reached their conclusions and asking myself if I could come up with something entirely different. No? OK, how about something slightly different? Still no? Well, do I agree with the authors on their findings and their observed results? And so on. I like thinking that my critical reading process resembles the Self-Consistent Field method, which iteratively reaches the best wavefunction for a given set of conditions, though never the exact one.
The methodology section is a bit tricky, especially when it comes to computational chemistry. Back when I was a grad student working in an inorganic chemistry lab, I’d only read the methodology if I had any plans of reproducing the experiment; other than that, I didn’t care too much whether reagents were purchased from Aldrich or Fluka, or whether the spectrophotometer was a Perkin Elmer one. I just expected the authors to have purified their reagents prior to use and calibrated all spectrophotometers. Now, in computational chemistry, I do read about the methods employed; which functional and basis set were used, and why, are my most frequent questions, although the level of theory is usually stated in the abstract. I also take a look at which methods were used to calculate which properties; these questions are important when we have to decide how much to trust the results in front of us.
Finally, I seldom read the introduction because, if the paper is relevant to my own research, I don’t need to be told why it is important or interesting; I’m already sold on that premise! That is why I’m reading the paper in the first place! If both the author and I act in bona fide, we both already know what the state of the art is, so let’s move on, because I have a ton of other papers to read. Hence, I read the introduction only when I’m trying to immerse myself in a new field, or when reading something that seems interesting but has little to do with my area of expertise. There is another reason why I almost never read introductions: even when I try to work in bona fide, there are a lot of people out there who don’t. Twice I have received reviews from a mysterious referee who believes it would serve the work a great deal to cite two, maybe three, other papers which he or she lists for your convenience, only to find out that they all belong to the same author in each case and that they are not quite entirely related to the manuscript.
In the title of this post I also try to address the writing of a scientific paper. Although I’m not an authority on it, I think today’s key phrase is bona fide. So to young and not-so-young scientists out there, I’d ask you to write in bona fide, please. Be concise. Be convincing. Be thorough and be critical. This is science we are doing, not stamp collecting. It shouldn’t be about getting all sorts of things out there; it is about expanding the knowledge of the human race one paper at a time. But we are humans, and therefore we are flawed. More and more cases of scientific misconduct are found throughout the literature, and nowadays, with the speed of blogging and tweeting, we can point at too many of them. The role of bloggers in pointing out these frauds, about which I’ve written before here, is the subject of recent controversy and possibly the topic of a future post. We are all being scrutinized in our work, but that shouldn’t be an excuse to make up data, to tinker or tamper with it, to push our own personal agendas, or to gain prestige in an otherwise wild academic environment.
I for one may never publish in Science or Nature; I may never be selected for any important prize, but even the promise of achieving any of those is not worth the guilt trip of lying to an entire academic society. I try then, to always remember that science is not about getting the best answers, but about posing the right questions.
What is your own style for reading papers? Any criticism to my style? How different is the style of a grad student from that of a researcher?
As usual thanks for reading, rating and commenting!
Having a new paper out is always fun, and this week we got the wonderful news from the Journal of Physical Chemistry C that a paper was accepted which I co-authored with Prof. Alireza Badiei at the University of Tehran in Iran and his student Dr. Pezhman Zarabadi-Poor, who actually got us all in touch.
The paper is titled “Selective Optical Sensing of Hg(II) in Aqueous Media by H-Acid/SBA-15: A Combined Experimental and Theoretical Study“; in it we explored the fluorescence quenching mechanism for a Hg(II) complex which forms the basis of a novel selective mercury detector. Geometry optimizations were carried out at the PBE0/6-31++G** PCM level of theory (along with the aug-cc-pVDZ-PP basis set and corresponding ECP for Hg), also the electronic spectrum of both the free acid and the Hg(II) complex was calculated.
(Frontier orbitals were depicted using Chemcraft)
We can observe that the HOMO and LUMO+1 are mainly located on the naphthalene ring, allowing for the S0 → S1 transition and back, which accounts for the molecular fluorescence. Other internal conversion processes that account for the quenching effect were also assessed and discussed in the paper. In short, we have obtained a full quantum description of the mechanism by which coordination of the free acid to Hg(II) alters the ligand’s electronic structure, converting its emissive lowest-lying excited state into a dark state, i.e., quenching! Pretty cool stuff!
Once again, thanks to both Dr. Zarabadi-Poor and Prof. Badiei for thinking of me as a collaborator in this joint endeavor, which hopefully won’t be our last. A PDF copy of the article is available by direct request through this post.
Thanks for reading, sharing, rating and commenting.
I don’t know why I haven’t written about the Local Bond Order (LBO) before! When I thought about it a few days ago, my immediate reaction was to shy away from it, since it would constitute a blatant self-promotion attempt; but hell! This is my blog! A place I’ve created for my blatant self-promotion! So without further ado, I hereby present to you one of my own original contributions to Theoretical Chemistry.
During my graduate years I grew interested in weakly bonded inorganic systems, namely those with secondary interactions in bidentate ligands such as xanthates, dithiocarboxylates, dithiocarbamates and so on. Describing the resulting geometries around the central metal atom required the invocation of secondary interactions defined purely by geometrical parameters (Alcock, 1972): such an interaction is present if the interatomic distance is longer than the sum of the covalent radii yet shorter than the sum of the van der Waals radii. This definition is subject to many constraints, such as the accuracy of the measurement, which in turn depends on the quality of the single crystal used in the X-ray diffraction experiment, and the chosen definition of covalent radii (Pauling, Bondi, etc.); most importantly, it sheds no light on the roles of crystal packing and intermolecular contacts, or on the energetics of the interaction.
This is why in 2004 we developed a simple yet useful definition of bond order which could account, for a single molecule in vacuo, for the strength and relevance of a secondary interaction relative to the well-defined covalent bonds.
Barroso-Flores, J. et al. Journal of Organometallic Chemistry 689 (2004) 2096–2102. http://dx.doi.org/10.1016/j.jorganchem.2004.03.035
Let a Molecular Orbital be defined as a wavefunction ψi which in turn may be constructed as a linear combination of Atomic Orbitals (or atom-centered basis set functions) φj.
We define ζLBO in the following way, where we explicitly take into account a doubly occupied orbital (hence the multiplication by 2) and therefore we are assuming a closed shell configuration in the Restricted formalism.
The summation is carried over all the orbitals which belong to atom A1 and those of atom A2.
Simplifying, we obtain
where Sjk is the overlap integral for the φj and φk functions.
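The equations themselves appeared as images in the original post and are missing here; from the surrounding description they can plausibly be reconstructed as follows (my notation, not necessarily the paper’s verbatim):

```latex
% MO as an LCAO expansion over atom-centered basis functions
\psi_i = \sum_j c_{ij}\,\varphi_j

% Local Bond Order between atoms A_1 and A_2
% (closed-shell, restricted formalism; hence the factor of 2)
\zeta_{\mathrm{LBO}} = 2 \sum_i \sum_{j \in A_1} \sum_{k \in A_2}
c_{ij}\, c_{ik}\, S_{jk},
\qquad
S_{jk} = \int \varphi_j^{*}\,\varphi_k \, d\mathbf{r}
```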
By summing over all i MOs, this definition projects all the MOs onto the space of the functions centered on atoms A1 and A2. The definition is purely quantum mechanical in nature and independent of any geometric requirement on the interacting atoms (i.e. their interatomic distance), so it can be used to complement the internuclear-distance argument when assessing their interaction. It is also very simple to calculate: all you need are the coefficients of the LCAO expansion and the corresponding overlap integrals.
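A minimal sketch of how ζLBO could be computed from an LCAO coefficient matrix and the AO overlap matrix follows; the function and variable names are mine, and the H2-like toy numbers are illustrative, not taken from the paper:

```python
import numpy as np

def local_bond_order(C, S, occ, a1, a2):
    """Local Bond Order between atoms A1 and A2 (closed-shell case):
    zeta = 2 * sum_i sum_{j in A1} sum_{k in A2} C[j,i] * C[k,i] * S[j,k],
    summed over the doubly occupied MOs listed in 'occ'.
    C: LCAO coefficients (basis functions x MOs); S: AO overlap matrix;
    a1, a2: indices of the basis functions centered on each atom."""
    zeta = 0.0
    for i in occ:
        for j in a1:
            for k in a2:
                zeta += 2.0 * C[j, i] * C[k, i] * S[j, k]
    return zeta

# Toy example: an H2-like system, one s-function per atom.
s = 0.66                              # a typical 1s-1s overlap near equilibrium
S = np.array([[1.0, s], [s, 1.0]])
c = 1.0 / np.sqrt(2.0 * (1.0 + s))    # normalized bonding-MO coefficient
C = np.array([[c], [c]])              # a single doubly occupied MO
print(local_bond_order(C, S, occ=[0], a1=[0], a2=[1]))
```

For this toy case the sum collapses to 2c²s = s/(1+s), a single positive number reflecting the covalent bond; a weak secondary interaction would give a correspondingly small ζLBO.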
Unfortunately, the Local Bond Order hasn’t gained much traction, partly because it is hidden in a journal where few would think to look for it. I hope someone finds it interesting and useful; if so, don’t forget to cite it appropriately 😉