Category Archives: Computational Chemistry
The energy of your calculated transition state (TS) is lower than that of the reagents. That's gotta be an error, right? Well, maybe not.
Typically, in classical transition state theory, we associate the reaction barrier with the energy difference between the reaction complex and the TS; in other words, we associate the reaction barrier with the relative energy of the TS. However, this isn't always the case: the TS isn't always located at the barrier, which may simply not exist or may be a submerged one, i.e., the relative energy of the TS is negative with respect to the reaction complex. This leads to negative activation energies, but one must bear in mind that the activation energy is not equal to the relative energy of the TS; it is obtained from the slope of the Arrhenius plot, which in turn comes from the Arrhenius equation given below.
k = A exp(-Ea/RT), or in logarithmic form ln k = ln A - Ea/(RT). The Arrhenius plot is then the plot of ln k vs. 1/T, with slope -Ea/R.
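Extracting Ea from a set of rate constants then amounts to a linear fit of ln k against 1/T; here is a minimal sketch in Python (the rate constants are synthetic, generated with a known barrier of 50 kJ/mol just to show the fit recovers it):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def activation_energy(T, k):
    """Estimate Ea (J/mol) from the slope of an Arrhenius plot (ln k vs 1/T)."""
    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    return -slope * R  # slope of the Arrhenius plot is -Ea/R

# Synthetic rate constants with a known Ea of 50 kJ/mol
T = np.array([280.0, 300.0, 320.0, 340.0])       # K
k = 1e12 * np.exp(-50000.0 / (R * T))            # s^-1
ea = activation_energy(T, k)                     # ~50000 J/mol
```

A submerged barrier would simply show up as a negative slope of opposite sign, i.e., rate constants that decrease with temperature and hence a negative Ea.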
Caution is advised, since the apparent presence of such a barrier may be due to a computational artifact rather than to the real kinetics taking place; that's why an IRC calculation must follow a TS optimization in order to verify the validity of the TS. Keep in mind that in classical transition state theory we're 'slicing' a multidimensional map along a carefully chosen reaction coordinate, but this choice might not be entirely the right one, or even an existing one for that matter. I also recommend changing the level of theory, reconsidering the reaction complex structure (because a hidden intermediate or complex may be lurking between reactants and TS, see figure 1), and fully verifying the thermochemistry of all components involved before asserting that any given reaction under study has one of these atypical barriers.
Electronic excitations are calculated vertically according to the Franck–Condon principle; this means that the geometry does not change upon excitation and we merely calculate the energy required to reach the next electronic state. In some instances, say when calculating not only the absorption spectrum but also the emission, it is important to know what the geometry minimum of this final state looks like, or whether it even exists at all (Figure 1). Optimizing the geometry of a given excited state requires the prior calculation of the vertical excitations, whether via a multireference method, quantum Monte Carlo, or Time-Dependent Density Functional Theory (TD-DFT), which, due to its lower computational cost, is the most widespread method.
Most single-reference treatments, ab initio or density based, yield good agreement with experiment for the lower states, but not so much for higher excitations or processes that involve the excitation of two electrons. Of course, an appropriate selection of the method ensures the accuracy of the obtained results, and the more states are considered, the better their description, although the calculation becomes more computationally demanding in turn.
In Gaussian 09 and 16, the argument to the ROOT keyword selects a given excited state to be optimized. In the following example, five excited states are calculated and the optimization is requested upon the second excited state. If no ROOT is specified, then the optimization would be carried out by default on the first excited state (Where L.O.T. stands for Level of Theory).
#p opt TD=(nstates=5,root=2) L.O.T.
Gaussian16 now includes the calculation of analytic second derivatives, which allows for the calculation of vibrational frequencies for IR and Raman spectra, as well as transition state optimizations and IRC calculations in excited states, thus opening an entire avenue for the computation of photochemistry.
If you already computed the excited states and just want to optimize one of them from a previous calculation, you can read the previous results with the following input:
#p opt TD=(Read,Root=N) L.O.T. Density=Current Guess=Read Geom=AllCheck
Common problems. The following error message is commonly observed in excited state calculations whether in TD-DFT, CIS or other methods:
No map to state XX, you need to solve for more vectors in order to follow this state.
This message usually means you need to increase the number of excited states to be calculated for a proper description of the one you're interested in: increase N in nstates=N in the route section, at a higher computational cost. A rule of thumb is to request at least 2 more states than the state of interest. This message can also reflect the fact that the energy ordering of the states changes during the optimization, and it can also mean that the ground state wave function is unstable, i.e., the energy of the excited state becomes lower than that of the ground state; in this case a single-determinant approach is unviable and CAS should be used if the size of the molecule allows it. Excited state optimizations are tricky this way: in some cases the optimization may cross from one PES to another, making it hard to know whether the resulting geometry corresponds to the state of interest or to another one. Gaussian recommends changing the step size of the optimization from the default 0.3 Bohr to 0.1, but obviously this will make the calculation take longer.
If the minimum on the excited state potential energy surface (PES) doesn't exist, then the excited state is not bound; take for example the first excited state of the H2 molecule, which doesn't show a minimum, so the 'optimized' geometry would correspond to both H atoms moving away from each other indefinitely (Figure 2). Nevertheless, a failed optimization doesn't necessarily mean the minimum does not exist, and further analysis is required; for instance, check whether the gradient is converging to zero even when the optimization steps are not.
The war against COVID-19 has been waged on many fronts. The computational chemistry community has done its share during this pandemic to put forward a cure, a vaccine, or a better understanding of the molecular mechanisms behind human infection by the SARS-CoV-2 virus. A few vaccines are now showing their heads and starting to make their way around the globe to stop the spread, amidst a climate of disinformation, distrust, and political upheaval, all of which pose several challenges yet to be faced aside from the technical and scientific ones.
This is by no means a comprehensive review of the literature; in fact, most of the literature cited herein was found on Twitter under the combined #CompChem and #COVID hashtags. Summarizing the research by the CompChem community on COVID-19 related topics in a single blog post would be nearly impossible; I trust a book is being written on it as I type these lines.
The structural elucidation of the proteins associated with the SARS-CoV-2 virus is probably the first step required in designing chemical compounds capable of modifying their functions and altering their life cycle without altering the biochemistry of the host. The Coronavirus Structural Taskforce has elucidated the structure of 28 SARS-CoV-2 proteins, aside from the 300+ proteins of the previous SARS-CoV virus, using tools from the FoldIt at home game based on the Rosetta program to heuristically predict the structure of these proteins. Structure-based drug design relies on knowledge of the structure of the active site (hence the name), but in the case of newly discovered proteins for which homology modeling is not entirely feasible, a ligand-based approach named D3Similarity was developed early in the pandemic by the group of Prof. Zhijian Xu for identifying the possible active sites. Mapping of the viral genome and proteome was also achieved early on, during the first days of lockdown on the American continent. The information was readily made available and usable for further studies, which prompts another challenge: the rapid dissemination, review, and evaluation of information to make scientifically sound claims and data-based decisions. In this regard, the role of preprints cannot be stressed enough. Without rapid communication, scientific results cannot generate the much needed critical mass to turn all these data into knowledge. As evidenced by the vast majority of the links present in this post, ChemRxiv from the ACS served the much needed function of gathering, linking, and putting the data out there for scientific evaluation, in order to accelerate the discovery of solutions to the various steps of the virus' reproductive cycle through various strategies.
The role of supercomputing has been paramount worldwide in the various CompChem efforts (read the C&EN piece) on several fronts, from structural elucidation to the development of vaccines; the AI-driven structural modeling of spike proteins and their infection mechanism, led by Prof. Rommie Amaro (UCSD) and Dr. Arvind Ramanathan, was celebrated with the Gordon Bell Special Prize. Many molecular dynamics simulations have been performed on potential inhibitors of proteins such as the spike protein; in some cases these simulations, coupled with cryo-EM microscopy, allowed for the elucidation of the hinging mechanism of these spike proteins and their thermodynamic properties, and all-atom simulations assessed the rigidity of the receptor as the cause of its infectivity. Still, owning these computing resources isn't always cost effective; that's why they have been outsourced to companies such as Amazon Web Services, as Pearlman did for the QM/DFT calculations of the binding energy of several drug candidates for the inhibition of the virus' main protease (MPro). Many other CADD studies are available (here, here, and here). Researchers from all around the world can chip in and join the effort by reaching out to the COVID-19 High Performance Computing Consortium (HPC), which brings some of the most advanced computing systems to the hands of private and academic researchers with relevant projects aimed at the study of the virus. On the other side of the Atlantic, the Partnership for Advanced Computing in Europe (PRACE) also provides access to advanced computing services for research. As an effort to keep all the developing information curated and concentrated, the COVID-19 Molecular Structure and Therapeutics Hub was created to provide a community-driven data repository and curation service for molecular structures, models, therapeutics, and simulations related to computational research on therapeutic opportunities.
As described above, molecular dynamics simulations are crucial in assessing how drugs interact with proteins. But molecular dynamics can only do so much, since it is computationally intensive; hence the use of Polarizable Force Field (PFF) algorithms to obtain results in the microsecond regime with high-resolution sampling methods, which have also been applied to the modeling of the MPro protein: the phase space is sampled by different MD trajectories, which are then tested and selected. Aside from classical simulations, artificial intelligence predictions, and docking calculations, quantum mechanical calculations have also been employed in the search for the most intimate interactions governing the mechanisms of inhibition of proteins. On this front, a Fragment Molecular Orbital based analysis was carried out to find which residues in MPro interacted the most with a given inhibitor.
Virtual screening is at the heart of the computationally aided drug discovery process, especially high-throughput virtual screening such as the one performed by the group of Andre Fischer at Basel, in which 11 potential drugs were narrowed down from a pool of over 600 million compounds analyzed as potential protease inhibitors. Repurposing of antiviral drugs, and of other entry-inhibiting compounds, is also a major avenue explored in the search for treatments; in the linked study by Shailly Tomar et al., antiviral drugs that are also anti-inflammatory are believed to take care of the lung inflammation and injury associated with the infection while also disrupting the virus' infection mechanism. The comeback of Virtual Reality can make virtual screening more cooperative, even under lockdown conditions, and more 'tangible', as the company Nanome has proven with their COVID-19 Town Hall meetings, which aim at the modeling of proteins in 3D space. Aside from the de novo and repurposing efforts, the search for peptides against infection by SARS-CoV-2 was an important topic (here and here). More recently, Skariyachan and Gopal turned to natural products of herbal origin for their virtual screening (molecular docking and dynamics). In their perspective, the chemical complexity achieved through biosynthesis can overcome the bottleneck of chemical discovery while at the same time turning to the ancient practices of herbal remedies described in Ayurveda. Other researchers, like Manish Manish, have also turned to libraries of 500,000+ natural compounds to find potential drugs for MPro.
The year is coming to an end, but the pandemic is not, in any way. Now, with the advent of new strains and the widespread vaccination effort put in place, it is more important than ever to keep the fight strong in our labs but also in our personal habits and responsibilities; the same advice that was given at the beginning of the year is still in effect today and will continue to be for the months to come. I want to wish everyone who reads this a happy holiday season, but above all I want to pay a small tribute to the scientists working relentlessly in one of the largest coordinated scientific efforts in modern history, one that can only be compared to the Moon landing or the Manhattan Project; to those scientists and all the healthcare personnel, may you find rest soon, may your efforts never go unnoticed: thank you for your service.
Molecular Orbitals (MOs) are linear combinations of Atomic Orbitals (AOs), which in turn are linear combinations of other functions called 'basis functions'. A basis, or more accurately a basis set, is a collection of functions that obey a set of rules (such as being orthogonal to each other and possibly being normalized) with which all AOs are constructed; although these are centered on each atomic nucleus, the canonical way in which they are combined yields delocalized MOs. In other words, an MO can occupy a large space spanning several atoms at once. We don't mind this expansion across a molecule, but what about between two molecules? Calculating the interaction energy between two or more molecular fragments leads to an artificial extra-stabilization term that stems from the fact that electrons in molecule 1 can occupy AOs (or the basis functions which form them) centered on atoms of molecule 2.
Fundamentally, the interaction energy of any A—B dimer, Eint, is calculated as the energy difference between the dimer and the separately calculated energies for each component (Equation 1).
Eint = EAB – EA – EB (1)
However, the calculation of Eint by this method is highly sensitive to the choice of basis set due to the Basis Set Superposition Error (BSSE) described in the first paragraph. The BSSE is particularly troublesome when small basis sets are used, due to the poor description of dispersion interactions, but treating this error by just choosing a larger basis set is seldom feasible for systems of considerable size. In equation 1, EA and EB are calculated with the basis sets of A and B respectively, while only EAB uses the larger, combined basis set of A and B simultaneously. The Counterpoise method is a nifty correction to equation 1 that calculates each component with the full AB basis set (Equation 2):
EintCP = EABAB – EAAB– EBAB (2)
where the superscript AB means the whole AB basis set is used; this is accomplished by using 'ghost' atoms with no nuclei and no electrons, but with basis functions centered on them.
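Combining the five energies from equations 1 and 2 is simple arithmetic; here is a small Python helper (the function name, the example numbers, and the hartree-to-kcal/mol conversion are mine, for illustration only):

```python
HARTREE_TO_KCAL = 627.509  # conversion factor, kcal/mol per hartree

def interaction_energies(e_ab, e_a, e_b, e_a_ab, e_b_ab):
    """Return (uncorrected Eint, counterpoise-corrected Eint, BSSE),
    all in kcal/mol; inputs are total energies in hartree.
    e_a, e_b:       monomers in their own basis sets (equation 1)
    e_a_ab, e_b_ab: monomers in the full AB basis, via ghost atoms (equation 2)"""
    e_int = e_ab - e_a - e_b            # equation 1
    e_int_cp = e_ab - e_a_ab - e_b_ab   # equation 2
    bsse = e_int_cp - e_int             # the (positive) BSSE correction
    return (e_int * HARTREE_TO_KCAL,
            e_int_cp * HARTREE_TO_KCAL,
            bsse * HARTREE_TO_KCAL)

# Hypothetical total energies (hartree) for an A-B dimer and its fragments
e_int, e_int_cp, bsse = interaction_energies(-200.50, -100.20, -100.25,
                                             -100.21, -100.26)
```

Note that the uncorrected Eint comes out more negative than the corrected one: the BSSE is an artificial extra stabilization, so the counterpoise correction always makes the binding look weaker.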
In Gaussian, BSSE is calculated with the Counterpoise method developed by Boys and Bernardi. It requires the keyword Counterpoise=N, where N is the number of fragments to be considered (for an A—B system, N=2). Each atom in the coordinates list must be assigned to its fragment; additionally, the charge and multiplicity must be specified for each fragment and for the whole supermolecular ensemble. Follow the example of this hydrogen fluoride dimer.
%chk=HF2.chk
#P opt wB97XD/6-31G(d,p) Counterpoise=2

HF dimer

0,1 0,1 0,1
H(Fragment=1) 0.00 0.00 0.00
F(Fragment=1) 0.00 0.00 0.70
H(Fragment=2) 0.00 0.00 1.00
F(Fragment=2) 0.00 0.00 1.70
For closed shell fragments the first line is straightforward, but note that the first pair of numbers in the charge-multiplicity line corresponds to the whole ensemble, whereas the following pairs correspond to each fragment in consecutive order. Fragments do not need to be specified contiguously, i.e., you don't need to define all atoms of fragment 1 and then all atoms of fragment 2, etc.; they can be interleaved and the program will still assign them correctly. I typed wB97XD just as an example, but any other method, DFT or ab initio, may be used; only semiempirical methods do not admit a BSSE calculation, because they don't make use of a basis set in the first place!
The output provides the corrected energy (in atomic units) for the whole system, as well as the BSSE correction (which, added to the previous term, yields the uncorrected energy of the system). Gaussian16 also provides these values in kcal/mol as 'Complexation energies', first the raw (uncorrected) value and then the corrected energy.
BSSE is always present and cannot be entirely eliminated because of the use of finite basis sets but it can be correctly dealt with if the Counterpoise method is included.
I have written about extracting information from excited state calculations but an important consideration when analyzing the results is the proper use of the keyword density.
This keyword lets Gaussian know which density is to be used when calculating properties. An important property when dealing with excited states is the change in dipole moment between the ground state and any given excited state. The Transition Dipole Moment is an important quantity that allows us to predict whether any given electronic transition will be allowed or not, and a non-zero change in the dipole moment of a molecule upon an electronic transition helps us characterize said transition.
Say you perform a TD-DFT calculation without the density keyword: by default, the results will correspond to the lowest excited state out of all the requested states, which may or may not be the state relevant to the transition of interest; you may be interested in the dipole moment of all your excited states.
Three separate calculations would be required to calculate the change of dipole moment upon an electronic transition:
1) A regular DFT for the ground state as a reference
2) TD-DFT, to calculate the electronic transitions; request as many states as you need/want, analyze it and from there you can see which transition is the most important.
3) Request the density of the Nth state of interest to be recovered from the checkpoint file with the following route section:
# TD(Read,Root=N) LOT Density=Current Guess=Read Geom=AllCheck
Replace N with the Nth state which caught your eye in step 2) and LOT with the Level of Theory you've been using in the previous steps. That should give you the dipole moment for the structure of the Nth excited state, which you can compare with the ground state value calculated in 1). Again, if Density=Current is not used, only the properties of N=1 will be printed.
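Once you have the dipole components of the ground and the Nth excited state from the two outputs, the change in dipole moment is just the norm of the vector difference; a quick Python sketch (the component values here are made up):

```python
import math

def dipole_change(mu_ground, mu_excited):
    """Magnitude (Debye) of the dipole moment change between two states,
    given their (x, y, z) components as printed in the Gaussian output."""
    return math.dist(mu_ground, mu_excited)

# Hypothetical dipole components (Debye) for the ground and Nth excited state
delta_mu = dipole_change((0.0, 0.0, 1.0), (0.0, 0.0, 3.5))  # 2.5 D
```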
This is a guest post by our very own Gustavo “Gus” Mondragón whose work centers around the study of excited states chemistry of photosynthetic pigments.
When you're calculating excited states (no matter the method you're using: TD-DFT, CI-S(D), EOM-CCS(D), etc.), the analysis of the orbital contributions to the electronic transitions poses a challenge. In this post, I'm gonna guide you through a CI-Singles excited states calculation and the analysis of the electronic transitions.
I'll use the adenine molecule for this post. After doing the corresponding geometry optimization with the method of your choice, you can do the excited states calculation. For this, I'll use two methods: CI-Singles and TD-DFT.
The route section for the CI-Singles calculation looks as follows:
#p CIS(NStates=10,singlets)/6-31G(d,p) geom=check guess=read scrf=(cpcm,solvent=water)
adenine excited states with CI-Singles method
I use the same geometry from the optimization step, and I request only 10 singlet excited states. The CPCM implicit solvation model (solvent=water) is requested. If you want to do TD-DFT, the route section should look as follows:
#p FUNCTIONAL/6-31G(d,p) TD(NStates=10,singlets) geom=check guess=read scrf=(cpcm,solvent=water)
adenine excited states with TD-DFT method
Where FUNCTIONAL is the DFT exchange-correlation functional of your choice. Here I strictly advise against using B3LYP; CAM-B3LYP is a better choice to start with.
Both calculations give us the excited states information: excitation energy, oscillator strength (as the f value), excitation wavelength, and multiplicity:
Excitation energies and oscillator strengths:
Excited State 1: Singlet-A 6.3258 eV 196.00 nm f=0.4830 <S**2>=0.000
11 -> 39 -0.00130
11 -> 42 -0.00129
11 -> 43 0.00104
11 -> 44 -0.00256
11 -> 48 0.00129
11 -> 49 0.00307
11 -> 52 -0.00181
11 -> 53 0.00100
11 -> 57 -0.00167
11 -> 59 0.00152
11 -> 65 0.00177
The data above corresponds to some of the electron transitions involved in this excited state; I had to cut the list because there are a lot of them for all the excited states. If you have done excited state calculations before, you will have noticed that the HOMO-LUMO transition is usually an important one, but not the only one to be considered. This is where the Natural Transition Orbitals (NTOs) come in: with these orbitals we can analyze the electron transitions.
As an example, I'll first show you the HOMO-LUMO transition in the first excited state of adenine. It appears in the long list as follows:
35 -> 36 0.65024
The 0.65024 value corresponds to the transition amplitude, but by itself it is not very informative for excited state analysis. We must calculate the NTOs of an excited state from a new Gaussian input file, reading from the checkpoint file we used to calculate the excited states. The file looks as follows:
#p SP geom=allcheck guess=(read,only) density=(Check,Transition=1) pop=(minimal,NTO,SaveNTO)
Some important things about this last file: no level of theory is needed, since all the calculation data is read from the checkpoint file "adenine.chk" and saved into the new checkpoint file "adNTO1.chk"; we must use the previously calculated density and specify the transition of interest, i.e., the excited state we want to analyze. Since we don't need to specify charge, multiplicity, or even the comment line, this calculation finishes really fast.
After doing this last calculation, we use the new checkpoint file “adNTO1.chk” and we format it:
formchk -3 adNTO1.chk adNTO1.fchk
If we open this formatted checkpoint file with GaussView, Chemcraft, or whichever visualizer you want, we will see something interesting by looking at the MO diagram, as follows:
Notice that the frontier orbitals show the same value of 0.88135, which is the weight of this hole-particle pair in the first excited state. As these orbitals contribute the most, we can plot them using the cubegen utility:
cubegen 0 mo=homo adNTO1.fchk adHOMO.cub 0 h
This last command line plots the equivalent of the HOMO orbital. If we want to plot the LUMO, just change the "homo" keyword to "lumo"; it doesn't matter whether it is written in capital letters or not.
You should be aware that the Natural Transition Orbitals are quite different from the Molecular Orbitals. For visual comparison, I've also printed the molecular orbitals, obtained from the optimization and the excited states calculations, without calculating NTOs:
These are the molecular frontier orbitals, plotted with Chimera with 0.02 as the isovalue for both phase spaces:
The frontier NTOs look qualitatively the same, but that’s not necessarily always the case:
If we analyze these NTOs on a hole-electron model, the HOMO refers to the hole space and the LUMO refers to the electron space.
The orbitals may look alike, but the frontier NTOs and the frontier MOs are quite different from each other, and the former are the ones implied in the first excited state of adenine. The electron transition is reported as follows:
If I had to make a graphical summary of this topic, it would be the next one:
NTO analysis is useful no matter whether you calculate excited states using CIS(D), EOM-CCS(D), TD-DFT, CASSCF, or any excited states method of your choice. NTOs are also useful for population analysis in excited states, but those calculations require other software: MultiWFN is an open-source code that allows you to do this analysis, and another one is TheoDORE, which we'll cover in a later post.
The Computational Chemistry Comparison and Benchmark DataBase (CCCBDB) from the National Institute of Standards and Technology (NIST) collects experimental and calculated thermochemistry-related values for 1968 common molecules, constituting a vast source of benchmarks for various kinds of calculations.
In particular, scaling factors for vibrational frequencies are very useful when calculating vibrational spectra. These scaling factors are arranged by level of theory, ranging from HF to MP2, DFT, and multireference methods, and are obtained by least-squares regression between experimental and calculated frequencies for a set of molecules at a given level of theory.
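For a fixed level of theory, that least-squares scale factor has a simple closed form, c = Σ(νcalc·νexpt)/Σ(νcalc²), which minimizes Σ(c·νcalc − νexpt)². A minimal numpy sketch (the frequency values below are invented, purely for illustration):

```python
import numpy as np

def scaling_factor(calc, expt):
    """Least-squares scale factor c minimizing sum((c*nu_calc - nu_expt)**2),
    i.e. c = sum(nu_calc*nu_expt) / sum(nu_calc**2)."""
    calc = np.asarray(calc, dtype=float)
    expt = np.asarray(expt, dtype=float)
    return float(np.dot(calc, expt) / np.dot(calc, calc))

# Invented calculated (harmonic) vs. experimental fundamentals, in cm^-1
c = scaling_factor([3100.0, 1650.0, 1050.0], [2950.0, 1600.0, 1000.0])
scaled = c * np.array([3100.0, 1650.0, 1050.0])  # frequencies to report
```

Harmonic frequencies overestimate the observed fundamentals, which is why these factors typically come out slightly below 1.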
Aside from vibrational spectroscopy, a large number of structural and energetic properties can be found and estimated for small molecules. A quick formation enthalpy can be calculated from experimental data and then compared to the reported theoretical values at a large number of levels of theory. Moments of inertia, enthalpies, entropies, charges, frontier orbital gaps, and even some odd values or calculations gone awry are pointed out, so you know whether you're dealing with a particularly problematic system. The CCCBDB includes tutorials and input/output files for performing these kinds of thermochemistry calculations, making it also a valuable learning resource.
Every computational chemist should be aware of this site, particularly when collaborating with experimentalists or when carrying out calculations trying to replicate experimental data. The vastness of the site calls for a long dive to explore its possibilities and capabilities for more accurate calculations.
The canonical molecular orbital depiction of an electronic transition is often a messy business in terms of a 'chemical' interpretation of 'which electrons' go from 'which occupied orbitals' to 'which virtual orbitals'.
Natural Transition Orbitals provide a more intuitive picture of the orbitals, whether mixed or not, involved in any hole-particle excitation. This transformation is particularly useful when working with the excited states of molecules with extensively delocalized chromophores or multiple chromophoric sites. The elegance of the NTO method relies on its simplicity: separate unitary transformations are performed on the occupied and on the virtual set of orbitals in order to get a localized picture of the transition density matrix.
 R. L. Martin, J. Chem. Phys., 2003, DOI:10.1063/1.1558471.
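In practice the transformation amounts to a singular value decomposition of the transition density matrix; here is a toy numpy sketch (the amplitudes below are made up, not from a real calculation):

```python
import numpy as np

# Toy transition density matrix T (rows: occupied MOs, columns: virtual MOs);
# in a real calculation its elements come from the TD-DFT/CIS amplitudes.
T = np.array([[0.05, 0.10],
              [0.70, 0.02],
              [0.01, 0.69]])

U, s, Vt = np.linalg.svd(T, full_matrices=False)
# Columns of U: hole NTOs expressed in the occupied-MO basis
# Rows of Vt:   particle NTOs expressed in the virtual-MO basis
# s**2 (normalized): weight of each hole-particle pair in the transition
weights = s**2 / np.sum(s**2)
```

When one or two singular values dominate, the whole transition collapses into one or two hole-particle pairs instead of the many canonical-MO excitations printed in the output.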
After running a TD-DFT calculation with the keyword TD(NStates=n) (where n = number of states to be requested) we need to take that result and launch a new calculation for the NTOs, but let's take it one step at a time. As an example, here's phenylalanine, which was already optimized to a minimum at the B3LYP/6-31G(d,p) level of theory. If we take that geometry and launch a new calculation with TD(NStates=40) in the route section, we obtain the UV-Vis spectrum, and the output looks like this (only the first three states are shown):
Excitation energies and oscillator strengths:

 Excited State   1:      Singlet-A      5.3875 eV  230.13 nm  f=0.0015  <S**2>=0.000
      42 -> 46        0.17123
      42 -> 47        0.12277
      43 -> 46       -0.40383
      44 -> 45        0.50838
      44 -> 47        0.11008
 This state for optimization and/or second-order correction.
 Total Energy, E(TD-HF/TD-KS) = -554.614073682
 Copying the excited state density for this state as the 1-particle RhoCI density.

 Excited State   2:      Singlet-A      5.5137 eV  224.86 nm  f=0.0138  <S**2>=0.000
      41 -> 45       -0.20800
      41 -> 47        0.24015
      42 -> 45        0.32656
      42 -> 46        0.10906
      42 -> 47       -0.24401
      43 -> 45        0.20598
      43 -> 47       -0.14839
      44 -> 45       -0.15344
      44 -> 47        0.34182

 Excited State   3:      Singlet-A      5.9254 eV  209.24 nm  f=0.0042  <S**2>=0.000
      41 -> 45        0.11844
      41 -> 47       -0.12539
      42 -> 45       -0.10401
      42 -> 47        0.16068
      43 -> 45       -0.27532
      43 -> 46       -0.11640
      43 -> 47        0.16780
      44 -> 45       -0.18555
      44 -> 46       -0.29184
      44 -> 47        0.43124
The oscillator strength is listed for each excited state as "f", and it is a measure of the probability of that excitation occurring. If we look at the third one for this phenylalanine we see f=0.0042, a very low probability; the following list shows which orbital transitions compose that excitation and with what amplitude, so the first line indicates a transition from orbital 41 (HOMO-3) to orbital 45 (LUMO). There are 10 such transitions composing that excitation, and visualizing them all with canonical orbitals is not an intuitive picture, so let's try the NTO approach. We're going to take excitation #10 of phenylalanine as an example, just because it has a higher oscillator strength:
 Excited State  10:      Singlet-A      7.1048 eV  174.51 nm  f=0.3651  <S**2>=0.000
      41 -> 45        0.35347
      41 -> 47        0.34685
      42 -> 45        0.10215
      42 -> 46        0.17248
      42 -> 47        0.13523
      43 -> 45       -0.26596
      43 -> 47       -0.22995
      44 -> 46        0.23277
Each set of NTOs for each transition must be calculated separately. First, copy your filename.chk file from the TD-DFT result to a new one and name it after the Nth state of interest, as shown below (state 10 in this case). NOTE: in the route section, replace N with the number of the excitation of interest according to the results in filename.log. Run this separately for each transition you're interested in:
%chk=state10.chk
#p B3LYP/6-31G(d,p) Geom=AllCheck Guess=(Read,Only) Density=(Check,Transition=N) Pop=(Minimal,NTO,SaveNTO)

(followed by a blank line; with Geom=AllCheck no title, charge, or multiplicity sections are needed)
By requesting SaveNTO, the canonical orbitals in the state10.chk file are replaced with the NTOs of the 10th excitation. This makes plotting easier, since most visualizers just plot whatever set of orbitals they find in the chk file; if they found the canonical MOs instead, some re-processing would be needed. This is much more straightforward.
Now we format our chk files into fchk with the formchk utility:
formchk -3 filename.chk filename.fchk
formchk -3 state10.chk state10.fchk
If we open filename.fchk (the file where the original TD-DFT calculation is located) with GaussView we can plot all orbitals involved in excited state number ten, those would be seven orbitals from 41 (HOMO-3) to 47 (LUMO+2) as shown in figure 1.
If we now open state10.fchk, we see that the numbers at the side of the orbitals are not their energies but their occupation numbers, particular to this state of interest, so we only need to plot the ones with the highest occupations; in our example those are orbitals 44 and 45 (HOMO and LUMO), which have occupations = 0.81186. You may include 43 and 46 (HOMO-1 and LUMO+1, respectively) for a more complete description (occupations = 0.18223), but we're still dealing with 4 orbitals instead of 7.
The NTO transition 44 -> 45 is far easier to conceptualize than all the 10 combinations given in the canonical basis by the direct TD-DFT calculation. TD-DFT provides us with the correct transitions; NTOs just paint us a picture more readily available to the chemist's mindset.
NOTE: for G09 rev. C and above, the %OldChk option is available. I haven't personally tried it, but it can be used to specify the checkpoint file where the excitations are located and then write the NTOs of interest into a new chk file, thus eliminating the need to copy the original chk file for each state.
NTOs are based on the natural orbitals vision of Löwdin and others, and the idea is said to be so straightforward that it has been re-discovered from time to time. Be that as it may, the NTO visualization provides a much clearer picture of the excitations occurring during a TD calculation.
Thanks for reading, stay home and stay safe during these harsh days, everyone. Please share, rate, and comment on this and other posts.
As a continuation of our previous work on estimating pKa values from DFT calculations for carboxylic acids, we now present the complementary pKb values for amino groups obtained by the same method, and the coupling of both methodologies for predicting the isoelectric point (pI) values of amino acids as a proof of concept.
Analogously to our work on pKa, we now used the Minimum Surface Electrostatic Potential, VS,min, as a descriptor of the availability of the nitrogen lone pair, and correlated it with the experimental basicity of a large number of amines, separated into three groups: primary, secondary, and tertiary amines.
Interestingly, the correlation coefficient between experimental and calculated pKb values decreases in the following order: primary (R2 = 0.9519) > secondary (R2 = 0.9112) > tertiary (R2 = 0.8172). This could be due to steric effects, the change in s-character of the lone pair, or just plain old selection bias. Nevertheless, there is a good correlation between both values, and the resulting equations can predict the pKb value of an amino group to within less than one unit, which is very good for a statistical method that does not require the calculation of a full thermodynamic cycle.
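The correlations themselves are plain linear fits of experimental pKb against VS,min; the sketch below shows the idea in Python with invented numbers (the actual datasets and fitted coefficients are in the paper, not here):

```python
import numpy as np

# Invented (VS,min in kcal/mol, experimental pKb) pairs for primary amines;
# illustrative only, not the data used in the paper.
vs_min = np.array([-40.0, -38.0, -35.0, -33.0, -30.0])
pkb    = np.array([  3.2,   3.4,   3.8,   4.1,   4.5])

m, b = np.polyfit(vs_min, pkb, 1)   # linear model: pKb = m*VS,min + b
pkb_pred = m * (-36.0) + b          # predict pKb of a new amine from its VS,min
```

A deeper (more negative) VS,min signals a more available lone pair and hence a stronger base, i.e., a lower pKb, which is why the toy data above trend the way they do.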
We then took thirteen amino acids (those without titratable side chains) and simultaneously calculated VS,min and VS,max for the amino and carboxyl groups (the latter with the use of equation 2 from our previous work published in Molecules, MDPI), and the arithmetic average of both gave us their corresponding pI values with an agreement of less than one unit.
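As a sanity check of the averaging step, here is the arithmetic in Python, assuming the usual aqueous relation pKa + pKb = 14 to convert the amino group's basicity into an acidity (my assumption for this sketch; the paper works from the fitted VS,min/VS,max equations directly):

```python
def isoelectric_point(pka_carboxyl, pkb_amino):
    """pI as the arithmetic average of the two ionizable-group pKa values;
    the amino pKa is obtained from its pKb via pKa + pKb = 14 (water, 25 C)."""
    return 0.5 * (pka_carboxyl + (14.0 - pkb_amino))

# Glycine: pKa(COOH) ~ 2.34 and pKa(NH3+) ~ 9.60 (i.e. pKb ~ 4.40) -> pI ~ 5.97
pi_gly = isoelectric_point(2.34, 4.40)
```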
This work is now available at the Journal of Chemical Information and Modeling (DOI: 10.1021/acs.jcim.9b01173); as always a shoutout is due to the people working on it: Leonardo “Leo” Lugo, Gustavo “Gus” Mondragón and leading the charge Dr. Jacinto Sandoval-Lira.