Category Archives: Computational Chemistry

Percentage of Molecular Orbital Composition – G09, G16


Canonical Molecular Orbitals are, by construction, delocalized over the various atoms making up a molecule. In some contexts it is important to know how much of any given orbital is made up by a particular atom or group of atoms; while you could calculate this by hand from the coefficients of each MO in terms of every AO (or basis function) centered on each atom, there is a straightforward way to do it in Gaussian.

If we’re talking about ‘dividing’ a molecular orbital into atomic components, we’re most definitely talking about population analysis calculations, so we’ll resort to the pop keyword and the orbitals option in the standard syntax:

#p M052x/cc-pVDZ pop=orbitals

This will produce the following output right after the Mulliken population analysis section:

Atomic contributions to Alpha molecular orbitals:
 Alpha occ 140 OE=-0.314 is Pt1-d=0.23 C38-p=0.16 C31-p=0.16 C36-p=0.16 C33-p=0.15
 Alpha occ 141 OE=-0.313 is Pt1-d=0.41
 Alpha occ 142 OE=-0.308 is Cl2-p=0.25
 Alpha occ 143 OE=-0.302 is Cl2-p=0.72 Pt1-d=0.18
 Alpha occ 144 OE=-0.299 is Cl2-p=0.11
 Alpha occ 145 OE=-0.298 is C65-p=0.11 C58-p=0.11 C35-p=0.11 C30-p=0.11
 Alpha occ 146 OE=-0.293 is C58-p=0.10
 Alpha occ 147 OE=-0.291 is C22-p=0.09
 Alpha occ 148 OE=-0.273 is Pt1-d=0.18 C11-p=0.12 C7-p=0.11
 Alpha occ 149 OE=-0.273 is Pt1-d=0.18
 Alpha vir 150 OE=-0.042 is C9-p=0.18 C13-p=0.18
 Alpha vir 151 OE=-0.028 is C7-p=0.25 C16-p=0.11 C44-p=0.11
 Alpha vir 152 OE=0.017 is Pt1-p=0.10
 Alpha vir 153 OE=0.021 is C36-p=0.15 C31-p=0.14 C63-p=0.12 C59-p=0.12 C38-p=0.11 C33-p=0.11
 Alpha vir 154 OE=0.023 is C36-p=0.13 C31-p=0.13 C63-p=0.11 C59-p=0.11
 Alpha vir 155 OE=0.027 is C65-p=0.11 C58-p=0.10
 Alpha vir 156 OE=0.029 is C35-p=0.14 C30-p=0.14 C65-p=0.12 C58-p=0.11
 Alpha vir 157 OE=0.032 is C52-p=0.09
 Alpha vir 158 OE=0.040 is C50-p=0.14 C22-p=0.13 C45-p=0.12 C17-p=0.11
 Alpha vir 159 OE=0.044 is C20-p=0.15 C48-p=0.14 C26-p=0.12 C54-p=0.11

Alpha and Beta densities are listed separately only in unrestricted calculations; otherwise only the former is printed. Each orbital is listed sequentially (occ = occupied; vir = virtual), followed by its energy (OE = orbital energy) in atomic units and then the fraction with which each atom contributes to that MO.

By default only the ten highest occupied and ten lowest virtual orbitals are analyzed, but the number of MOs to be considered can be modified with orbitals=N; if you want all orbitals analyzed, use the option AllOrbitals instead. Also, the threshold used for printing the composition is set to 10%, but it can be modified with the option ThreshOrbitals=N. For the same compound as before, here are the output lines for the HOMO and LUMO (MOs 149 and 150) with ThreshOrbitals set to N=1, i.e. a 1% contribution threshold (ThreshOrbitals=1):

Alpha occ 149 OE=-0.273 is Pt1-d=0.18 N4-p=0.08 N6-p=0.08 C20-p=0.06 C13-p=0.06 C48-p=0.06 C9-p=0.06 C24-p=0.05 C52-p=0.05 C16-p=0.04 C44-p=0.04 C8-p=0.03 C15-p=0.03 C17-p=0.03 C45-p=0.02 C46-p=0.02 C18-p=0.02 C26-p=0.02 C54-p=0.02 N5-p=0.01 N3-p=0.01
Alpha vir 150 OE=-0.042 is C9-p=0.18 C13-p=0.18 C44-p=0.08 C16-p=0.08 C15-p=0.06 C8-p=0.06 N6-p=0.04 N4-p=0.04 C52-p=0.04 C24-p=0.04 N5-p=0.03 N3-p=0.03 C46-p=0.03 C18-p=0.03 C48-p=0.02 C20-p=0.02
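
For reference, a route section combining these options might look like the following sketch, which simply reuses the functional and basis set from the example above; the choice of 20 orbitals is only illustrative:

#p M052x/cc-pVDZ pop=(orbitals=20,ThreshOrbitals=1)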

The fragment=N label in the coordinates can be used just as in BSSE Counterpoise calculations, and the output will then show the orbital composition by fragment under the label "Fr", grouping all contributions to each MO from the AOs centered on the atoms of that fragment.
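
As a minimal, hypothetical sketch (the atoms, positions, and overall charge/multiplicity are placeholders; the fragment labeling follows the same syntax used in the Counterpoise example further down this page):

#p M052x/cc-pVDZ pop=orbitals

Title card

0 1
C(Fragment=1)        0.00   0.00   0.00
O(Fragment=2)        1.00   1.00   1.00
...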

As always, thanks for reading, sharing, and rating. I hope someone finds this useful.

Au(I) Chemistry No.3 – New paper in Dalton Transactions


Stabilizing gold in low oxidation states is a longstanding challenge of organometallic chemistry. Doing so requires a fine tuning of the electron density provided to the Au atom by a ligand via the formation of a σ bond. The group of Professor Rong Shang at Hiroshima University has accomplished the stabilization of an aurate complex through the use of a boron, nitrogen-containing heterocyclic carbene; DFT calculations at the ωB97XD/(LANL2TZ(f),6-311G(d)) level of theory revealed that this ligand combines the high π-withdrawing character of the neutral 4π B,N-heterocyclic carbene (BNC) moiety with a weakly aromatic 6π character with π-donating properties, implying that this is the first cyclic carbene ligand that can be tuned between π-withdrawing (Fischer-type) and π-donating (Schrock-type) character.

A π-withdrawing character on the part of the ligand is important to allow the electron-rich gold center to back-donate some of its excess electron density, thereby preventing its oxidation. A modification of Bertrand's cyclic (alkyl)(amino)carbene (CAAC) allowed Shang and co-workers to perform the two-electron reduction of Au(I) to form the aurate shown in Figure 1 (CCDC 2109027). This work also reports the modular synthesis of the BNC-1 ligand, and the mechanism was once again calculated by Leonardo "Leo" Lugo.

Figure 1. Compound 4a (H atoms omitted for clarity)

The ability of the BNC-1 ligand to accept gold's back-donation is reflected in the HOMO/LUMO gap, as shown in Figure 2: while BNC-1 has a gap of 7.14 eV, the classic NHC has a gap of 11.28 eV; furthermore, in the case of the NHC the accepting orbital is not the LUMO but LUMO+1. Additionally, the NBO delocalization energies show that the back-donation from the Au 5d orbital into the C-N antibonding π* orbital is about half that expected for a Fischer-type carbene, suggesting a character intermediate between a π-accepting and a π-donating carbene. On the other hand, the largest interaction corresponds to the carbanion density donated into the vacant Au p orbital (ca. 45 kcal/mol). All these observations reveal the successful tuning of the electron density on BNC-1.

Figure 2. Frontier Molecular Orbitals for the ligand BNC-1 and a comparison to similar carbenes used elsewhere

This study is available in Dalton Transactions. As usual, I’m honored to be a part of this international collaboration, and I’m deeply thankful to the amazing Prof. José Oscar Carlos Jiménez-Halla for inviting me to be a part of it.

Yoshitaka Kimura, Leonardo I. Lugo-Fuentes, Souta Saito, J. Oscar C. Jimenez-Halla, Joaquín Barroso-Flores, Yohsuke Yamamoto, Masaaki Nakamoto and Rong Shang* "A boron, nitrogen-containing heterocyclic carbene (BNC) as a redox active ligand: synthesis and characterization of a lithium BNC-aurate complex", Dalton Trans., 2022, 51, 7899-7906. https://doi.org/10.1039/D2DT01083F

DFT beyond academia


Density Functional Theory is by far the most successful way of gaining access to molecular properties starting from their composition. Calculating the electronic structure of molecules or solid phases has become a widespread activity in computational as well as experimental labs, not only for shedding light on the properties of a system under study but also as a tool to design systems with tailor-made properties. This level of understanding of matter brought by DFT is based on a rigorous physical and mathematical development; still, and maybe because of it, DFT (and electronic structure calculations in general, for that matter) might be thought of as something of little use outside academia.

Prof. Juan Carlos Sancho-García from the University of Alicante in Spain encouraged me to talk to his students last month about the reach of DFT in the industrial world. Having once worked in the private sector myself, I remembered that the simulations performed there were mostly DPD (Dissipative Particle Dynamics), a coarse-grained kind of molecular dynamics used to investigate the interactions between polymers and surfaces, but no DFT calculations were ever in sight. It is widely known that docking, QSAR, and Molecular Dynamics are used in the pharma industry for the development of new drugs, but I wasn't sure where DFT could fit in all this. I thought a patent search would be a good proxy for the commercial applicability of DFT, so I took a shallow dive and searched for patents explicitly mentioning the use of DFT as part of the invention's development and protection. The first thing I noticed is that, although they appear to be few, their number keeps growing over the years (Figure 1). Again, this was not an exhaustive search, so I'm obviously overlooking many.

Figure 1 – A non-exhaustive search in a patents database

The second thing that caught my attention was that the first hit came from 1998, nicely coinciding with the rise of B3LYP (Figure 2). This patent was awarded to Australian inventors from the University of Wollongong, New South Wales, for determining trace gas concentrations by chromatography by means of calculating the FT-IR spectra of sample molecules (Figure 3); so DFT is used as part of the invention, although I don't know whether this is a widespread method in analytical labs.

Figure 2 – B3LYP cited in scientific publications

While I'm mentioning the infamous B3LYP functional, a patent search for it yields the graph in Figure 4; most of these patents relate to the protection of photoluminescent or thermoluminescent molecules for light-emitting devices, and it appears that DFT calculations are used to provide the key features under protection, such as the HOMO-LUMO gap, etc.

Figure 4 – Patents bearing B3LYP as part of their invention

So what about software? Most of the more recent patents in Figure 1 (2018-2022) lie in the realm of electronics, particularly the development of semiconductors, ceramic or otherwise, so it seemed safe to assume VASP would be a popular choice to that end, right? It turns out that's not necessarily the case, since a patent search for VASP accounts for only about 10% of all the awarded patents (Figure 5).

Figure 5 – VASP in patents

I guess it's safe to say by now that DFT has a significant impact on industrial development, and one could only expect it to keep rising; however, the advent of machine learning and other artificial-intelligence-related methods promises an accelerated development. I went back to the patents database and this time searched for 'machine learning development materials' (the term 'development' was dropped by the search engine, which I guess found it too obvious), and its rise is quite remarkable, surpassing the frequency of DFT in patents (Figure 6), particularly in the past five years (2018-2022).

Figure 6 – The rise of the machines in materials development

I'm guessing that in some instances DFT and ML will go hand in hand in the industrial development process, but the timescales reachable by ML will only keep growing, so I'm left with the question: what are we waiting for to make ML and AI part of the chemistry curricula? As computational chemistry teachers we should start talking about these points with our students and convince department heads to help us create proper courses, or we risk our graduates becoming niche scientists at a time when new skills are sought after in the private sector.

__________________________________________________________________________________

Thanks again to Prof. Juan Carlos Sancho García at the University of Alicante, Spain, who asked me to talk about the subject in front of his class, and to Prof. José Pedro Cerón-Carrasco from Cartagena for allowing me to talk about this and other topics at Centro Universitario de la Defensa. Thank you, guys! I look forward to meeting you again soon.

Exciton Energy Transfer – Talk at the Virtual Winter School of Comp.Chem. 2022


I'm very honored to have been invited to this edition of this long-standing event, the Virtual Winter School of Computational Chemistry. In this talk I walk through the basics of what excitons are and how they move or transfer across matter, and of course give a primer on how to calculate the energy transfer with Gaussian.

This is a very basic introduction, but I hope someone finds it useful. Thanks to Henrique Castro for inviting me to take part in this experience, and to all the professors and students involved in the organization. Don't forget to go and check out all the other fantastic talks, including one by Nobel Laureate and chemistry legend Prof. Roald Hoffmann, at the Virtual Winter School's website: https://winterschool.cc/

Virtual Winter School on Computational Chemistry 2022


I’m very excited and honored to participate in this year’s Virtual Winter School on #CompChem. This event started back in 2015 and this year the list of participants includes Nobel Laureate and legend Roald Hoffmann. The topics will range from drug design to quantum chemistry on quantum computers. Additionally, two workshops will be given for ADF and Gaussian.

Aside from the teaching sessions there will also be some virtual social gatherings that promise to be a lot of fun. So don't miss it, 21-25 February 2022. Register here.

I will teach the tools to model Exciton Energy Transfer processes, a handy set of skills for working in the fields of photophysics, photosynthesis, or the photochemistry of materials. We'll review the concept of excitons and the basic mechanisms by which they are generated and transferred.

Thanks to Henrique Castro from Rio de Janeiro for inviting me to be a part of this event, which is a direct heir of the first electronic conferences organized by Profs. Bachrach and Rzepa. Here is the program.

Water splitting by proton to hydride umpolung—New paper in Chem.Sci.


The word ‘umpolung‘ is not used often enough in my opinion, and that’s a shame since this phenomenon refers to one of the most classic tropes or deus ex machina used in sci-fi movies—prominently in the Dr. Who lore*—and that is ‘reversing the polarity‘. Now, reversing the polarity only means that for any given dipole the positively charged part now acquires a negative charge, while the originally negatively charged part becomes positively charged, and thus the direction of the dipole moment is, well, reversed.

In chemistry, reversing the polarity of a bond is an even cooler matter because it means that atoms that typically behave as positively charged become negatively charged and react with other molecules accordingly. Such is the case of this new research conducted experimentally by Prof. Rong Shang at Hiroshima University and theoretically elucidated by Leonardo “Leo” Lugo, who currently works jointly with me and my good friend the always amazing José Oscar Carlos Jimenez-Halla at the University of Guanajuato, Mexico.

Production of molecular hydrogen from water splitting at room temperature is a remarkable feat that forms the basis of fuel cells in the search for cleaner sources of energy; this process commonly requires a metallic catalyst, and although it has been achieved via Frustrated Lewis Pairs based on Si(II), the use of an intramolecular electron relay process had not been reported so far.

Figure 1 – BPB

Prof. Rong Shang and her team synthesized an ortho-phenylene-linked, bisborane-functionalized phosphine (Figure 1) and demonstrated its stoichiometric reaction with water, yielding H2 and phosphine oxide quantitatively at room temperature. Along the reaction pathway the umpolung occurs when a proton from the captured water molecule becomes a hydride centered on a borane moiety of BPB. The reaction mechanism is shown in Figure 2.

According to the calculated mechanism, a water molecule coordinates to one of the borane groups via the oxygen atom, and the phosphorus atom then forms a hydrogen bond via its lone pair, separating the water molecule into OH– and H+; the latter migrates to the second borane, and it is during this migration (marked TSH2 in Figure 2) that the umpolung takes place: the natural charge of the hydrogen atom changes from positive to negative and stays so in the intermediate H3. This newly formed hydride reacts with the hydrogen atom of the OH group to form the reduction product H2, and the final phosphine oxide shows an intramolecular P=O…B interaction forming a five-membered ring which further stabilizes it.

These results are now available in Chemical Science, 2021, 12, 15603, DOI: 10.1039/d1sc05135k. As always, I deeply thank Prof. Óscar Jiménez-Halla for inviting me to participate in this venture.


* Below there’s a cool compilation of the Reverse the Polarity trope found in Dr. Who:

XIX RMFQT – National Meeting on TheoPhysChem


The Mexican Meeting on Theoretical Physical Chemistry is a national staple of our local scientific discipline. The nineteenth edition had to be a virtual conference due to the public health restrictions still enforced in Mexico. Nevertheless, this was a successful meeting in which we tried new things, such as a live broadcast via our new official YouTube channel and a Twitter poster session covered under the hashtag #RMFQTXIX.

Please browse the previous links (the talks are in Spanish; most Tweets are also in Spanish, but some are available in English). Twitter conferences are here to stay, and the creativity of the participants will be key to moving them forward; unfortunately, most of us are still grounded in the traditional idea of a physical poster, and that notion taken literally translates poorly to a Tweet. I wanted to embed some of the presented posters, but I don't want to leave anyone out, and fortunately there were too many to fit in a blog post. So head over to Twitter, check the hashtags #RMFQTXIX and #CompChemMX, and follow the official Twitter account of the RMFQT.

A big shout-out to the staff, PhD students Jessica Arcudia and Gustavo Mondragón, for keeping the live sessions and online broadcast running. The future of Mexican CompChem is in safe hands!

Fixing the error: Bad data into FinFrg


I found this error in the calculation of two interacting fragments, both with unpaired electrons. So, two radicals interact at a certain distance and the full system is deemed a singlet; therefore the unpaired electrons on the two fragments have opposite spins. The problem came when trying to calculate the Basis Set Superposition Error (BSSE), because in the Counterpoise method you need to assign a charge and multiplicity to each fragment, and it's not obvious how to assign opposite spins.

The core of the problem is related to the guess construction; normally a Counterpoise calculation would look like the following example:

#p B3LYP/6-31G(d,p) counterpoise=2

-2,1 -1,2 -1,2
C(Fragment=1)        0.00   0.00   0.00
O(Fragment=2)        1.00   1.00   1.00
...

Here the first pair of charge-multiplicity numbers corresponds to the whole molecule, and the following pairs to each fragment in increasing order of N (in this case, N = 2). So for this hypothetical example we have two anions (but they could easily be two cations), each with an unpaired electron, yielding a complex with charge = -2 and singlet multiplicity, which implies those two unpaired electrons have opposite spins. But if the guess (the initial trial wavefunction from which the SCF will begin) has a problem understanding this, then the title error shows up:

Bad data into FinFrg 
Error termination via Lnk1e ...

The solution to this problem is as simple as it is obscure: create a suitable guess wavefunction by placing a negative sign on the multiplicity of one of the fragments, as in the following example; since the resulting guess is stored in the checkpoint file, it can serve as the starting point for subsequent calculations. By using this negative sign we're not requesting a negative multiplicity, but rather the same multiplicity with spin opposite to that of the other fragment.

#p B3LYP/6-31G(d,p) guess=(only,fragment=2)

-2,1 -1,2 -1,-2 
C(Fragment=1)        0.00   0.00   0.00 
O(Fragment=2)        1.00   1.00   1.00 
...

This way, the second fragment will have the opposite spin (but the same multiplicity) as the first fragment. The only keyword tells Gaussian to calculate just the guess wavefunction and then exit, so the stored guess can be read in by later calculations, such as my failed Counterpoise one.
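
As a hedged sketch of that follow-up step (the checkpoint filename is hypothetical, and it assumes the guess=only job above was run with the same %chk line), the Counterpoise calculation could then read the stored wavefunction back in:

%chk=fragments.chk
#p B3LYP/6-31G(d,p) counterpoise=2 guess=read

Title card

-2,1 -1,2 -1,-2
C(Fragment=1)        0.00   0.00   0.00
O(Fragment=2)        1.00   1.00   1.00
...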

Submerged Reaction Energy Barriers


The energy of your calculated transition state (TS) is lower than that of the reactants. That's gotta be an error, right? Well, maybe not.

Typically, in classical transition state theory, we associate the reaction barrier with the energy difference between the reaction complex and the TS; in other words, we take the barrier to be the relative energy of the TS. However, this isn't always the case: the barrier may simply not exist, or it may be a submerged one, i.e. the relative energy of the TS is negative with respect to the reaction complex. This leads to negative activation energies, but one must bear in mind that the activation energy is not the relative energy of the TS; it is obtained from the slope of the Arrhenius plot, which in turn comes from the Arrhenius equation given below.

k = A exp(-Ea/RT)

or, in logarithmic form,

ln k = ln A - Ea/RT

The Arrhenius plot is then the plot of ln k vs 1/T, whose slope is -Ea/R, so that Ea = -R d(ln k)/d(1/T); a negative apparent activation energy simply means the rate constant decreases as the temperature increases.

Caution is advised, since the apparent presence of such a barrier may be due to a computational artifact rather than to the real kinetics taking place; that's why an IRC calculation must follow a TS optimization in order to verify that the TS is genuine. Keep in mind that in classical transition state theory we're 'slicing' a multidimensional map along a carefully chosen reaction coordinate, but this choice might not be entirely the right one, or even an existing one for that matter. I also recommend changing the level of theory, reconsidering the structure of the reaction complex (because a hidden intermediate or complex may be lurking between reactants and TS, see Figure 1), and fully verifying the thermochemistry of all components involved before asserting that any given reaction under study has one of these atypical barriers.

Geometry Optimizations for Excited States


Electronic excitations are calculated vertically according to the Franck-Condon principle; this means that the geometry does not change upon excitation and we merely calculate the energy required to reach the next electronic state. But in some instances, say calculating not only the absorption spectrum but also the emission, it is important to know what the geometry minimum of this final state looks like, or whether it even exists at all (Figure 1). Optimizing the geometry of a given excited state requires the prior calculation of the vertical excitations, whether via a multireference method, quantum Monte Carlo, or Time-Dependent Density Functional Theory (TD-DFT), which due to its lower computational cost is the most widespread method.

Most single-reference treatments, ab initio or density based, yield good agreement with experiment for the lower states, but not so much for higher excitations or processes that involve the excitation of two electrons. Of course, an appropriate selection of the method ensures the accuracy of the obtained results, and the more states are considered, the better their description, although the calculation becomes more computationally demanding in turn.

Figure 1. The vertical excitation does not match the minimum on the excited state

In Gaussian 09 and 16, the argument to the ROOT keyword selects the excited state to be optimized. In the following example, five excited states are calculated and the optimization is requested on the second excited state; if no ROOT is specified, the optimization is carried out by default on the first excited state (L.O.T. stands for Level of Theory).

#p opt TD=(nstates=5,root=2) L.O.T.

Gaussian 16 now includes analytic second derivatives for excited states, which allows the calculation of vibrational frequencies for IR and Raman spectra, as well as transition state optimizations and IRC calculations on excited-state surfaces, thus opening an entire avenue for computational photochemistry.
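
As a sketch following the same placeholder notation as above, requesting an optimization plus a frequency calculation on, say, the first excited state might look like:

#p opt freq TD=(nstates=5,root=1) L.O.T.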

If you have already computed the excited states and just want to optimize one of them, you can read the previous results with the following input:

#p opt TD=(Read,Root=N) L.O.T. Density=Current Guess=Read Geom=AllCheck

Common problems. The following error message is commonly observed in excited state calculations, whether with TD-DFT, CIS, or other methods:

No map to state XX, you need to solve for more vectors in order to follow this state.

This message usually means you need to increase the number of excited states to be calculated for a proper description of the one you're interested in: increase N in nstates=N in the route section, at a higher computational cost. A rule of thumb is to request at least two more states than the state of interest. This message can also reflect the fact that the energy ordering of the states changes during the optimization, and it can also mean that the ground state wavefunction is unstable, i.e., the energy of the excited state falls below that of the ground state; in that case a single-determinant approach is unviable and CAS should be used if the size of the molecule allows it. Excited state optimizations are tricky this way: in some cases the optimization may cross from one PES to another, making it hard to know whether the resulting geometry corresponds to the state of interest or to another one. Gaussian recommends changing the maximum optimization step size from the default of 0.3 Bohr to 0.1 Bohr, but obviously this will make the calculation take longer.

Opt=(MaxStep=10)

If the minimum on the excited state potential energy surface (PES) doesn't exist, then the excited state is not bound; take for example the first excited state of the H2 molecule, which doesn't show a minimum, so the 'optimized' geometry would correspond to both H atoms moving away from each other indefinitely (Figure 2). Nevertheless, a failed optimization doesn't necessarily mean the minimum does not exist, and further analysis is required, for instance checking whether the forces are converging to zero while the displacements are not.

Figure 2. An unbound excited state with no minima ensures the dissociation of the system along the reaction coordinate