Photosynthesis, the basis of life on Earth, relies on the capacity of a living organism to capture solar energy and transform it into chemical energy through the synthesis of macromolecules such as carbohydrates. Although most of the molecular processes occurring in photosynthetic organisms (plants, algae, and even some bacteria) are well described, the mechanism of energy transfer from the light-harvesting molecules to the reaction centers is not entirely understood. Therefore, in our lab we have set out to study possible excitonic transfer mechanisms between pigments (chlorophyll and its corresponding derivatives). It is widely known that the photophysical properties of chlorophylls and their derivatives stem from the electronic structure of the porphyrin and are modulated by the presence of Mg, although it is not this ion that undergoes the main electronic transitions; we also know that Mg almost never lies in the same plane as the porphyrin macrocycle because it bears a fifth coordination, whether to another pigment or to a protein that keeps it in place (Figure 1).
During our calculations of the electronic structure of the pigments (bacteriochlorophyll-a, BChl-a) present in the Fenna-Matthews-Olson complex of green sulfur bacteria, we found that the Mg²⁺ ion at the center of one of these pigments could in fact form an intramolecular interaction with the C=C double bond in the phytyl fragment which lies beneath the porphyrin ring.
This would be the first time a dihapto coordination is suggested to occur in any chlorophyll, and that in itself is interesting enough, but we took it further and calculated the photophysical implications of having this fifth, intramolecular dihapto coordination as opposed to coordination by a protein, or no fifth coordination at all. Figure 3 shows the UV-Vis spectra calculated with Time-Dependent DFT using the CAM-B3LYP functional and the cc-pVDZ, 6-31G(d,p), and 6-31+G(d,p) basis sets. A red shift is observed for the planar configuration with respect to the five-coordinated species (regardless of whether the fifth coordination is to histidine or to the C=C double bond in the phytyl moiety).
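For readers curious how a stick spectrum from a TD-DFT run becomes the smooth curves shown in Figure 3, the usual trick is to convolve each excitation with a Gaussian. Here is a minimal sketch in Python; the excitation energies and oscillator strengths are made-up placeholders, not the values computed for BChl-a:

```python
import numpy as np

def broaden_spectrum(energies_ev, osc_strengths, sigma_ev=0.2, n_points=500):
    """Convolve discrete excitations (energy, oscillator strength)
    with Gaussians of width sigma_ev to get a smooth absorption profile."""
    grid = np.linspace(min(energies_ev) - 1.0, max(energies_ev) + 1.0, n_points)
    spectrum = np.zeros_like(grid)
    for e, f in zip(energies_ev, osc_strengths):
        spectrum += f * np.exp(-((grid - e) ** 2) / (2.0 * sigma_ev ** 2))
    return grid, spectrum

# Placeholder bands loosely mimicking a chlorophyll-like spectrum (Qy, Qx, Soret)
x, y = broaden_spectrum([1.6, 2.1, 3.0], [0.8, 0.1, 1.2])
```

In this picture a red shift simply shows up as the whole band envelope moving to lower energies (longer wavelengths).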
Before calculating the UV-Vis spectra, we had to unambiguously establish the presence of this interaction. To that end, as a first approximation, we calculated the C-Mg Wiberg bond indexes at the CAM-B3LYP/cc-pVDZ level of theory. The values, C(1)-Mg 0.022 and C(2)-Mg 0.032, are indicative of weak interactions; to take it even further, we performed a non-covalent interactions (NCI) analysis under the Atoms in Molecules formalism, calculated on the M06-2X density, which revealed the expected critical points for the η²-Mg-(C=C) interaction. As a control we performed the same calculation on magnesocene, just to unambiguously assign this kind of interaction (Fig 4, bottom).
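As a side note on what a Wiberg index of 0.02 to 0.03 actually means: in an orthonormalized (e.g. Löwdin) basis, the index between atoms A and B is just the sum of squared density-matrix elements coupling their basis functions, and a genuine single bond comes out near 1. A toy sketch with a minimal-basis H₂-like density matrix (illustrative only, not data from our system):

```python
import numpy as np

def wiberg_index(P, atom_of_mu, A, B):
    """Wiberg bond index W_AB = sum over mu on A, nu on B of P[mu, nu]**2,
    assuming the density matrix P is expressed in an orthonormal basis."""
    idx_A = [i for i, a in enumerate(atom_of_mu) if a == A]
    idx_B = [i for i, a in enumerate(atom_of_mu) if a == B]
    return sum(P[i, j] ** 2 for i in idx_A for j in idx_B)

# Toy example: minimal-basis H2, one basis function per atom
P_h2 = np.array([[1.0, 1.0],
                 [1.0, 1.0]])
w = wiberg_index(P_h2, atom_of_mu=[0, 1], A=0, B=1)  # -> 1.0, a single bond
```

So values of 0.02 to 0.03 sit roughly two orders of magnitude below a single bond, consistent with a weak dihapto interaction rather than a covalent bond.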
This research is now available at the International Journal of Quantum Chemistry. A big shoutout and kudos to Gustavo “Gus” Mondragón for his work on this project during his master's; many more good things are coming for him and for our group in this and other research ventures.
I’ve lately reviewed a ton of papers whose titles begin with some version of “Computational studies of…”, “Theoretical studies of…”, or, even more subtly, are just subtitled “A theoretical/computational study”. Even though, I gotta confess, this is probably something I’ve done once or twice myself, it got me thinking about the place and role of computational chemistry within chemistry itself.
As opposed to physicists, chemists are pressed to defend a utilitarian view of their work, and possibly because of that view some computational chemists lose sight of their real contribution to a study, which is far more than performing a routine electronic structure calculation. I personally don’t like it when an experimental colleague comes asking for ‘some calculations’ without a clear question for them to answer; computational chemistry is not an auxiliary science but a branch of physical chemistry in its own right, one that provides insight that experiments, chemical or physical, sometimes cannot.
I’m no authority on authoring research papers, but I encourage my students to think about the titles of their manuscripts in terms of what the manuscript most heavily relies on; whether it’s the phenomenon, the methodology, or the object of the study, that is what should be stressed in the title. Papers titled “Computational studies of…” are usually followed by the object of study, possibly overlooking the phenomenon observed throughout those studies. Such titles are a disservice to the science contained within the manuscript, just like experimental papers gain little from titles such as “Synthesis and Characterization of…”. It all comes down to finding a suitable narrative for our work, something I constantly remind my students of. It’s not about losing rigor or finding a way to oversell our results, but about actually driving a point home: what did you do, why, and how? Anna Clemens, a professional scientific writer, has a fantastic post on her blog about it and does it far better than I ever could. Also, while ranting on Twitter, the book Houston, We Have a Narrative was recommended to me; I will surely put it on my to-read list.
While I’m on the topic of narratives in science, I’m sure Dr. Stuart Cantrill from Nature Chemistry wouldn’t mind if I share with you his deconstruction of an abstract. Let’s play a game: give this abstract a title in the comments section based on the information contained in it.
Recently, the journal ACS Central Science asked me to write a viewpoint for their First Reactions section about a research article by Prof. Alán Aspuru-Guzik from Harvard University on the evolution of the Fenna-Matthews-Olson (FMO) complex. It was a very rewarding experience to write this piece since we are very close to having our own work on FMO published as well (stay tuned!). The FMO complex remains a great research opportunity for understanding photosynthesis and thus the origin of life itself.
In said article, Aspuru-Guzik’s team climbed their way up a computationally generated phylogenetic tree for the FMO complexes of different green sulfur bacteria, making small successive mutations to the protein while also calculating their photochemical properties. The idea is pretty simple and brilliant: perform a series of “educated guesses” on the structure of FMO’s ancestors (there are no fossil records of FMO, so these educated guesses are the next best thing) and find at what point the photochemistry goes awry. In the end the question is: which led the way? Did the photochemistry lead the evolution of FMO, or did the evolution of FMO lead to improved photochemistry?
Since both the article and the viewpoint are published as open access by the ACS, I won’t take up too much space here re-writing the whole thing and will instead exhort you to read them both.
Thanks for doing so!
Communication of scientific findings is an essential skill for any scientist, yet it’s one of those things some students are reluctant to do, partially because of the infamous blank-page scare. Once they are confronted with writing their thesis or papers, they make some common mistakes, like not thinking about who their audience is or not sticking to the main points. One of the highest forms of communication, believe it or not, is gossip, because gossip goes straight to the point, is juicy (i.e. interesting), and seldom needs contextualization: you deliver it to just the right audience (that’s why gossiping about friends to your relatives is almost never fun) and you do it at the right time (that’s the difference between gossip and anecdotes). Therefore, I tell my students to write as if they were gossiping: give your research a good narrative, because a poor narrative can make your results be overlooked.
I’ve read too many theses in which conclusions are about how well the methods work, and unless your thesis has to do with developing a new method, that is a terrible mistake. Methods work well, that is why they are established methods.
Take the following example of a piece of gossip: say you are in a committed monogamous relationship and you have the feeling your significant other is cheating on you. This is your hypothesis. This hypothesis is supported by their strange behavior; that would be the evidence supporting it. But be careful, because there is also anecdotal evidence, which isn’t relevant to your own case, as in: the spouse of a friend behaved this way when cheating, ergo mine is cheating too. The use of anecdotal evidence to support a hypothesis should be avoided like the plague. Then you need an experimental setup to prove, or even better disprove, your hypothesis. To that end you could hack into your better half’s email, have them followed, either by yourself or by a third party, confront their friends, snoop through their phone; basically anything that might give you some information. This is the core of your research: your data. But data is meaningless without a conclusion. Some people think data should speak for itself, letting each reader come up with their own conclusions so they don’t get biased by your vision, and while there is some truth to that, your data makes sense in a context that you helped develop, so providing your own conclusions is needed, or we aren’t scientists but stamp collectors.
This is when most students make a terrible mistake, and here is where gossip skills come in handy. When asked by friends (peers) what it was that they found out, most students will try to convince them that they knew the best algorithms for hacking a phone, or that they were super inconspicuous when following their partners, or how important the new method was for installing a third-party app that sent a text message every time the phone went outside a certain area, and yeah, by the way, I found them in bed together. Ultimately the question is left unanswered and the true conclusion lies buried in a lengthy, boring description of the work performed. Remember: you performed all that work to reach an ultimate goal, not just for the sake of performing it.
Writers say that every sentence in a book should either move the story forward or show character; in the same way, every section of your scientific piece should help make the point of your research. Keep the why and the what distinct from the how, and don’t be afraid of treating your research as the best piece of gossip you’ve had in years, because if you are a science student, it is.
As if I didn’t have enough things to do, I’m launching a new blog inspired by the #365papers hashtag on Twitter and the naturalproductman.wordpress.com blog. In it I’ll hopefully list, and write a femto-review of, every paper I read. This new effort is even more daunting than the actual reading of the huge digital pile of papers in my Mendeley To-Be-Read folder, the fattest of them all. The papers therein won’t be a comprehensive review of Comp.Chem. must-read papers but rather papers relevant to our lab’s research, or simply our curiosity.
Maybe I’ll include some papers brought to my attention by the group, and they could write the review. The whole endeavor might flop in a few weeks, but I want to give it a shot; we’ll see how it mutates and whether it survives or not. So far I haven’t managed to review all the papers I’ve read, but maybe this post will prompt me to do so, if only to save some face. The domain of the new blog is compchemdigest.wordpress.com, though I think it should have included the word MY at the beginning so as to convey the idea that it is only my own biased reading list. Anyway, if you’re interested, share it and subscribe; those posts will not be publicized.
Ever since I read the highly praised article by Floyd Romesberg in Nature back in 2013, I’ve been really interested in synthetic biology. In said article, an unnatural base pair (UBP) was not only inserted into a DNA double strand in vivo, but the organism was even able to replicate the UBPs in subsequent generations.
Inserting new unnatural base pairs into DNA works a lot like editing a computer’s code. Inserting a couple of UBPs in vitro is like inserting a comment: it won’t make a difference, but it’s still there. If the DNA sequence containing the UBPs can be amplified by molecular biology techniques such as PCR, it means a polymerase enzyme is able to recognize it and set it in place; this is equivalent to inserting a ‘hello world’ section into working code: it will compile, but it’s pretty much useless. Inserting these UBPs in vivo means the organism is able to thrive despite the large deformation in a short section of its genetic code, but having them replicated by the chemical machinery of the nucleus is an amazing feat that only a few molecules could allow.
The ultimate goal of synthetic biology would be to find a UBP which codes effectively and purposefully during translation of DNA. This last feat would be equivalent to inserting a working subroutine with a specific purpose into a program. But UBPs could serve more purposes than expanding the genetic code from a quaternary (base-four) to a senary (base-six) system: the field of DNA origami could also benefit from an expansion of the chemical and structural possibilities of the famous double helix, and marking and editing a sequence would become easier with distinctive sections containing nucleotides other than A, T, C, and G.
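The payoff of going from a quaternary to a senary alphabet is easy to quantify, since the number of possible triplet codons grows as the cube of the alphabet size:

```python
# Triplet codons drawn from an n-letter alphabet: n ** 3 possibilities.
natural = 4 ** 3     # A, T, C, G -> 64 codons
expanded = 6 ** 3    # four natural letters plus one UBP (two new letters) -> 216 codons
extra = expanded - natural   # 152 additional codons freed up for new chemistry
```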
It is precisely the concept of the double helix where our research takes place, since the available biochemical machinery for translation and replication can only work on a double helix; otherwise the repair mechanisms get activated, or the DNA just stops serving its purpose (i.e. the code won’t compile).
My good friend, Dr. Rodrigo Galindo and I have worked on the simulation of Romesberg’s UBPs in order to understand the underlying structural, dynamical and electronic causes that made them so successful and to possibly design more efficient UBPs based on a set of general principles. A first paper has been accepted for publication in Phys.Chem.Chem.Phys. and we’re very excited for it; more on that in a future post.
The literature in synthetic chemistry is full of reactions that do occur, but very little or no attention is paid to those that do not proceed. The question here is: what can we learn from reactions that do not take place even when our chemical intuition tells us they’re feasible? Is there valuable knowledge to be acquired by studying the ‘anti-driving force’ that inhibits a reaction? This is the focus of a new manuscript recently published by our research group in Tetrahedron (DOI: 10.1016/j.tet.2016.05.058), which was the basis of Guillermo Caballero’s BSc thesis.
It is well known in organic chemistry that if a molecular structure has the possibility of becoming aromatic, it can somehow undergo an aromatization process to achieve that more stable state. During some experimental efforts, Guillermo Caballero found two compounds that could easily be regarded as non-aromatic tautomers of a substituted pyridine but which were not transformed into the aromatic compound by any means explored, whether treatment with strong bases or thermal or photochemical reaction conditions.
These results led us to investigate the causes that inhibit these aromatization reactions, and here is where computational chemistry took over. As a first approach we proposed two plausible reaction mechanisms for the aromatization process and evaluated them with DFT transition-state calculations at the M05-2X/6-31+G(d,p)//B3LYP/6-31+G(d,p) level of theory. The results showed that although the aromatic tautomers are indeed more stable than their corresponding non-aromatic ones, a high activation free energy is needed to reach the transition states. Thus, the barrier heights are the first reason why aromatization is inhibited: there just isn’t enough thermal energy in the environment for the transformation to occur.
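To put "not enough thermal energy" in quantitative terms, the Eyring equation translates an activation free energy into a rate constant. A small sketch; the 40 kcal/mol barrier below is an arbitrary illustration, not the value reported in the paper:

```python
import math

def eyring_rate(dg_kcal_mol, T=298.15):
    """Eyring rate constant k = (kB*T/h) * exp(-dG_act / (R*T))."""
    kB = 1.380649e-23    # Boltzmann constant, J/K
    h = 6.62607015e-34   # Planck constant, J*s
    R = 1.987204e-3      # gas constant, kcal/(mol*K)
    return (kB * T / h) * math.exp(-dg_kcal_mol / (R * T))

# A hypothetical 40 kcal/mol barrier at room temperature:
k = eyring_rate(40.0)   # effectively zero on any laboratory timescale
```

Every few kcal/mol added to the barrier shaves orders of magnitude off the rate, which is why a high enough activation free energy simply shuts the reaction down at room temperature.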
But this is only the proximal cause; we then went in search of the distal causes (i.e. the reasons behind the high barriers). The second part of the work was the calculation of the delocalization energies and frontier molecular orbitals of the non-aromatic tautomers at the HF/cc-pVQZ level of theory, to gain insight into the large barrier heights. The energies showed a strong delocalization of the nitrogen’s lone pair towards the oxygen atom in the carbonyl group. This delocalization promoted the formation of an electron corridor formed by frontier and close-to-frontier molecular orbitals, resembling an extended push-pull effect. The hydrogen atoms that could promote the aromatization process are shown to be chemically inaccessible.
Further calculations on a series of analogous compounds showed that the dimethylamino moiety plays a crucial role in preventing the aromatization process. When this group was replaced with a nitro group, the calculations yielded a decrease in the barrier height large enough for the reaction to proceed. Electronically, the bonding electron corridor is interrupted by a pull-pull effect, which was assessed through the delocalization energies.
The identity of the compounds under study was confirmed through ¹H and ¹³C NMR as well as 2D NMR experiments (HMBC, HMQC), so we had to dive headlong into experimental techniques to back our calculations.
As part of an ongoing collaboration with the University of Arizona (UA) and the Center for Advanced Research and Studies (CINVESTAV – Saltillo), we are looking into the use of calix[n]arenes as bioremediation agents capable of extracting arsenic(V) and (III) species from water. Water contamination by arsenic is a pressing issue in northern Mexico and the southern US; therefore, any effort aimed at its elimination has strong social and health repercussions.
As in previous studies, all calixarenes were optimized along with their corresponding guests within the cavity, namely H₃AsO₄, H₂AsO₄⁻, and HAsO₄²⁻, at the DFT level with the so-called Minnesota functionals by Zhao and Truhlar, at the M06-2X/6-31G(d,p) level of theory. Interaction energies were calculated through the NBODel procedure. Calixarenes with R = SO₃H and PO₃H are the most promising leads. This study is now published in the Journal of Inclusion Phenomena and Macrocyclic Chemistry (DOI 10.1007/s10847-016-0617-0) as an online-first article.
This article is also the first to be published by our undergraduate (and, in a month, grad student) Gustavo Mondragón, who took on this project alongside his own research on photosynthesis.
Now my colleagues in Arizona and Saltillo, Prof. Reyes Sierra and Dr. Eddie López, respectively, will work on the experimental side of the project. Further calculations are being undertaken to extend this study to As(III) and to the use of other potential extracting materials such as metallic nanoparticles to which calixarenes could be covalently linked.
A new publication is now available in which we calculated the binding properties of a fluorescent water-soluble chemosensor for halides, which is especially sensitive to chloride. Once again, we were working in collaboration with an experimental group that is currently involved in developing all kinds of sustainable chemosensors.
The electronic structure of the chromophore was calculated at the M06-2X/6-311++G(d,p) level of theory under the SMD solvation model (water) at various pH levels, which was achieved simply by changing the protonation states and charges of the ligand. Wiberg bond indexes from the Natural Population Analysis showed strong interactions between the chloride ion and the chromophore. Fukui indexes were also calculated in order to find the most probable binding sites. A very interesting feature of this compound is its ability to form a cavity without being a macrocycle! I deem it a cavity because of the intramolecular interactions that prevent the entrance of solvent molecules but can be reversibly disrupted for the inclusion of an anion. In the figure below you can observe the remarkable quenching effect chloride has on the sensor’s fluorescence.
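For those unfamiliar with condensed Fukui indexes, they are obtained by finite differences of atomic charges between the N-, (N+1)-, and (N-1)-electron systems at fixed geometry. A schematic Python sketch with invented charges (not those of the actual chemosensor):

```python
def fukui_indexes(q_N, q_Nplus1, q_Nminus1):
    """Condensed Fukui indexes from atomic charges q of the N-, (N+1)-,
    and (N-1)-electron systems at the same geometry:
      f+ = q(N) - q(N+1)    (susceptibility to nucleophilic attack)
      f- = q(N-1) - q(N)    (susceptibility to electrophilic attack)."""
    f_plus = [qa - qb for qa, qb in zip(q_N, q_Nplus1)]
    f_minus = [qc - qa for qa, qc in zip(q_N, q_Nminus1)]
    return f_plus, f_minus

# Invented three-atom example; atom 0 comes out most electrophilic (largest f+)
fp, fm = fukui_indexes(q_N=[0.30, -0.20, -0.10],
                       q_Nplus1=[0.05, -0.35, -0.30],
                       q_Nminus1=[0.60, -0.10, 0.10])
```

The atom with the largest f+ is the most likely site for attack by an electron donor such as chloride, which is how these indexes point to probable binding sites.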
A quick look at the frontier molecular orbitals (FMOs) shows that the chloride anion acts as an electron donor to the sensor.
If you are interested in more details please check: Bazany-Rodríguez, I. J., Martínez-Otero, D., Barroso-Flores, J., Yatsimirsky, A. K., & Dorazco-González, A. (2015). Sensitive water-soluble fluorescent chemosensor for chloride based on a bisquinolinium pyridine-dicarboxamide compound. Sensors and Actuators B: Chemical, 221, 1348–1355. http://doi.org/10.1016/j.snb.2015.07.031
Thanks to Dr. Alejandro Dorazco from CCIQS for asking me to join him in this project, which currently includes some other joint ventures in the realm of molecular recognition.
As we approach the end of another year, and with it the time when my office becomes covered with post-it notes so I can find my way back into work after the holidays, we celebrate another paper published! This time in the Journal of Physical Chemistry A, as a follow-up to another paper published last year in JPC-C. Back then we reported the development of a selective sensor for Hg(II); this sensor consisted of 1-amino-8-naphthol-3,6-disulphonic acid (H-Acid) covalently bound to a modified silica SBA-15 surface. H-Acid is fluorescent, and we took advantage of the fact that in the presence of Hg(II) in aqueous media its fluorescence is quenched, but not in the presence of other ions, even closely related ones such as Zn(II) and Cd(II). In this new report we delve into the electronic reasons behind the quenching process by calculating the most important electronic transitions within the framework of Time-Dependent Density Functional Theory (TD-DFT) at the PBE0/cc-pVQZ level of theory (we also included an effective core potential on the heavy metal atoms in order to decrease the time of each calculation). One of the things I personally liked about this work is the combination of different techniques used to assess the photochemical phenomenon at hand, among them the calculation of various bond orders (Mayer, Fuzzy, Wiberg, delocalization indexes), time-dependent DFT, and charge-transfer delocalizations. Although we calculated all these descriptors to account for the changes in the electronic structure of the ligand that lead to the fluorescence quenching, only the delocalization indexes calculated with QTAIM were used to draw conclusions, while the rest are collected in the SI section.
Thanks a lot to my good friend and collaborator Dr. Pezhman Zarabadi-Poor for all his work, interest and insight into the rationalization of this phenomenon. This is our second paper published together. By the way, if any of you readers is aware of a way to finance a postdoc stay for Pezhman here at our lab, please send us a message because right now funding is scarce and we’d love to keep bringing you many more interesting papers.
For our research group this was the fourth paper published during 2014. We can only hope (and work hard) to have at least five next year without compromising their quality. I’m setting the goal to be 6 papers; we’ll see in a year if we delivered or not.
I’d also like to take this opportunity to thank all the readers of this little blog of mine for your visits and your live demonstrations of appreciation at various local and global meetings, such as the ACS meeting in San Francisco and WATOC14 in Chile; it means a lot to me to know that the things I write are read. If I were to make any New Year’s resolutions, it would be to reply more quickly to the questions posted, because if you took the time to write, I should take the time to reply.
I wish you all the best for 2015 in and out of the lab!