The energy of your calculated transition state (TS) is lower than that of the reagents. That's gotta be an error, right? Well, maybe not.
Typically, in classical transition state theory, we associate the reaction barrier with the energy difference between the reactant complex and the TS; in other words, with the relative energy of the TS. However, this isn't always the case: the TS isn't always located at the barrier, which may simply not exist or may be a submerged one, i.e., the relative energy of the TS is negative with respect to the reactant complex. This leads to negative activation energies, but one must bear in mind that the activation energy is not equal to the relative energy of the TS but rather comes from the slope of the Arrhenius plot, which in turn derives from the Arrhenius equation given below.
k = A exp(-Ea/RT), or in logarithmic form, ln k = ln A - Ea/RT. The Arrhenius plot is then the plot of ln k vs. T^-1, with slope -Ea/R.
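To make this concrete, here is a minimal sketch of extracting Ea from the slope of an Arrhenius plot with a least-squares fit; the rate constants and temperatures below are made-up values, purely for illustration.

```python
import math

# Hypothetical rate constants k (s^-1) measured at temperatures T (K)
T = [300.0, 320.0, 340.0, 360.0]
k = [1.2e-3, 5.8e-3, 2.3e-2, 7.9e-2]

R = 8.314  # gas constant, J mol^-1 K^-1

# Arrhenius plot: ln k vs 1/T; slope = -Ea/R, intercept = ln A
x = [1.0 / t for t in T]
y = [math.log(ki) for ki in k]

# Ordinary least-squares slope
n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)

Ea = -slope * R  # J/mol; a positive slope would yield a negative Ea
print(f"Ea = {Ea / 1000:.1f} kJ/mol")
```

A submerged barrier would simply show up here as a positive slope and hence a negative Ea; the fit itself doesn't care about the sign.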
Caution is advised, since the apparent presence of such a barrier may be a computational artifact rather than a reflection of the real kinetics; that's why an IRC calculation must follow a TS optimization, in order to verify that the TS actually connects the expected minima. Keep in mind that in classical transition state theory we're 'slicing' a multidimensional potential energy surface along a carefully chosen reaction coordinate, and this choice may not be entirely the right one, or even an existing one for that matter. I also recommend changing the level of theory, reconsidering the reactant complex structure (a hidden intermediate or complex may be lurking between reactants and TS, see Figure 1), and fully verifying the thermochemistry of all components involved before asserting that the reaction under study has one of these atypical barriers.
Electronic excitations are calculated vertically according to the Franck-Condon principle; this means that the geometry does not change upon excitation and we merely calculate the energy required to reach the next electronic state. In some instances, say when calculating not only the absorption spectrum but also the emission, it is important to know what the geometry minimum of this final state looks like, or whether it even exists at all (Figure 1). Optimizing the geometry of a given excited state requires the prior calculation of the vertical excitations, whether via a multireference method, quantum Monte Carlo, or time-dependent density functional theory (TD-DFT), which, due to its lower computational cost, is the most widespread method.
Most single-reference treatments, ab initio or density-based, yield good agreement with experiment for the lower states, but not so much for higher excitations or processes that involve the excitation of two electrons. Of course, an appropriate selection of the method ensures the accuracy of the results, and the more states are considered, the better their description, although the calculation becomes more computationally demanding in turn.
In Gaussian 09 and 16, the argument to the ROOT keyword selects the excited state to be optimized. In the following example, five excited states are calculated and the optimization is requested on the second excited state. If no ROOT is specified, the optimization is carried out by default on the first excited state (where L.O.T. stands for Level of Theory).
#p opt TD=(nstates=5,root=2) L.O.T.
Gaussian16 now includes the calculation of analytic second derivatives, which allows for the calculation of vibrational frequencies for IR and Raman spectra, as well as transition state optimizations and IRC calculations in excited states, thus opening an entire avenue for the computation of photochemistry.
If you have already computed the excited states and just want to optimize one of them from a previous calculation, you can read the previous results with the following input:
#p opt TD=(Read,Root=N) L.O.T. Density=Current Guess=Read Geom=AllCheck
Common problems. The following error message is commonly observed in excited state calculations, whether with TD-DFT, CIS, or other methods:
No map to state XX, you need to solve for more vectors in order to follow this state.
This message usually means you need to increase the number of excited states calculated in order to properly describe the one you're interested in: increase N in nstates=N in the route section, at a higher computational cost. A rule of thumb is to request at least two more states than the state of interest. This message can also reflect the fact that the energy ordering of the states changes during the optimization, or that the ground state wavefunction is unstable, i.e., the energy of the excited state becomes lower than that of the ground state; in that case a single-determinant approach is unviable and CASSCF should be used if the size of the molecule allows it. Excited state optimizations are tricky this way: in some cases the optimization may cross from one PES to another, making it hard to know whether the resulting geometry corresponds to the state of interest or to a different one. Gaussian recommends reducing the optimization step size from the default of 0.3 Bohr to 0.1, but obviously this will make the calculation take longer.
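One quick way to keep an eye on the state ordering between runs is to parse the 'Excited State' summary lines from the log file. The fragment below is a made-up log excerpt in the standard Gaussian summary format, just a sketch; adapt the regex if your output differs.

```python
import re

# Made-up fragment of a Gaussian TD-DFT log (illustrative only)
log_text = """\
 Excited State   1:      Singlet-A      6.3258 eV  196.00 nm  f=0.4830  <S**2>=0.000
 Excited State   2:      Singlet-A      6.5102 eV  190.44 nm  f=0.0120  <S**2>=0.000
 Excited State   3:      Singlet-A      6.9014 eV  179.65 nm  f=0.2101  <S**2>=0.000
"""

# Capture the state number and its excitation energy in eV
pattern = re.compile(r"Excited State\s+(\d+):\s+\S+\s+([\d.]+) eV")
states = [(int(n), float(e)) for n, e in pattern.findall(log_text)]

# A change in the energy ordering between optimization steps is one of the
# situations that can trigger the "No map to state" error discussed above.
energies = [e for _, e in states]
assert energies == sorted(energies), "state ordering changed!"
print(states)
```

Running this over the logs of successive optimization steps makes a reordering of states easy to spot before Gaussian complains about it.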
If the minimum on the excited state potential energy surface (PES) doesn't exist, then the excited state is not bound; take for example the first excited state of the H2 molecule, which doesn't show a minimum, so the optimized geometry would correspond to both H atoms moving away from each other indefinitely (Figure 2). Nevertheless, a failed optimization doesn't necessarily mean the minimum does not exist, and further analysis is required; for instance, checking whether the energy keeps decreasing monotonically as the fragments separate while the optimization never converges.
The war against COVID-19 has been waged on many fronts. The computational chemistry community has done its share during this pandemic to put forward a cure, a vaccine, or a better understanding of the molecular mechanisms behind human infection by the SARS-CoV-2 virus. The first few vaccines are now showing their heads and starting to make their way around the globe to stop the spread, amidst a climate of disinformation, distrust, and political upheaval, all of which pose challenges yet to be faced aside from the technical and scientific ones.
This is by no means a comprehensive review of the literature; in fact, most of the literature cited herein was spotted on Twitter under the combined #CompChem and #COVID hashtags. Summarizing the research by the CompChem community on COVID-19 related topics in a single blog post would be nearly impossible; I trust a book is being written on it as I type these lines.
The structural elucidation of the proteins associated with the SARS-CoV-2 virus is probably the first step required in designing chemical compounds capable of modifying their functions and altering the viral life cycle without altering the biochemistry of the host. The Coronavirus Structural Task Force has elucidated the structures of 28 SARS-CoV-2 proteins, aside from the 300+ proteins of the previous SARS-CoV virus, using tools from the FoldIt at home game, based on the Rosetta program, to heuristically predict the structures of these proteins. Structure-based drug design relies on knowledge of the structure of the active site (hence the name), but for newly discovered proteins where homology modeling is not entirely feasible, a ligand-based approach named D3Similarity was developed early in the pandemic by the group of Prof. Zhijian Xu to identify the possible active sites. Mapping of the viral genome and proteome was also achieved early on, during the first days of lockdown on the American continent. The information was readily made available and usable for further studies, which prompts another challenge: the rapid dissemination, review, and evaluation of information in order to make scientifically sound claims and data-based decisions. In this regard, the role of preprints cannot be stressed enough; without rapid communication, scientific results cannot generate the much needed critical mass to turn all these data into knowledge. As evidenced by the vast majority of the links in this post, ChemRxiv from the ACS served the much needed function of gathering, linking, and putting the data out there for scientific evaluation, in order to accelerate the discovery of solutions to the various steps of the virus' reproductive cycle through various strategies.
The role of supercomputing has been paramount worldwide to the various efforts made in CompChem (read the C&EN piece) on various fronts, from structural elucidation, such as the AI-driven structural modeling of spike proteins and their infection mechanism led by Prof. Rommie Amaro (UCSD) and Dr. Arvind Ramanathan, which was recognized with the Gordon Bell Special Prize, to the development of vaccines. Many molecular dynamics simulations have been performed on potential inhibitors of proteins such as the spike protein; in some cases these simulations, coupled with cryo-EM microscopy, allowed for the elucidation of the hinging mechanism of the spike proteins and their thermodynamic properties, and all-atom simulations assessed the rigidity of the receptor as the cause of its infectivity. Still, owning these computing resources isn't always cost effective, which is why they have been outsourced to companies such as Amazon Web Services, as Pearlman did for the QM/DFT calculations of the binding energy of several drug candidates for the inhibition of the virus' main protease (MPro). Many other CADD studies are available (here, here, and here). Researchers from all around the world can chip in and join the effort by reaching out to the COVID-19 High Performance Computing (HPC) Consortium, which brings some of the most advanced computing systems to the hands of private and academic researchers with relevant projects aimed at the study of the virus. On the other side of the Atlantic, the Partnership for Advanced Computing in Europe (PRACE) also provides access to advanced computing services for research. As an effort to keep all the developing information curated and concentrated, the COVID-19 Molecular Structure and Therapeutics Hub was created to provide a community-driven data repository and curation service for molecular structures, models, therapeutics, and simulations related to computational research on therapeutic opportunities.
As described above, molecular dynamics simulations are central to assessing how drugs interact with proteins. But molecular dynamics can only do so much, since it is computationally intensive; hence the use of polarizable force field (PFF) algorithms to obtain results in the microsecond regime with high-resolution sampling methods, which have also been applied to the modeling of the MPro protein: the phase space is sampled by different MD trajectories, which are then tested and selected. Aside from classical simulations, artificial intelligence predictions, and docking calculations, quantum mechanical calculations have also been employed in the search for the most intimate interactions governing the mechanisms of protein inhibition. On this front, a Fragment Molecular Orbital based analysis was carried out to find which residues in MPro interacted the most with a given inhibitor.
Virtual screening is at the heart of the computer-aided drug discovery process, especially high-throughput virtual screening such as the one performed by the group of Andre Fischer at Basel, in which 11 potential drugs were narrowed down from a pool of over 600 million compounds analyzed as potential protease inhibitors. Repurposing of antiviral drugs, and of other entry-inhibiting compounds, is also a major avenue explored in the search for treatments; in the linked study by Shailly Tomar et al., antiviral drugs that are also anti-inflammatory are believed to take care of the lung inflammation and injury associated with the infection while also disrupting the virus' infection mechanism. The comeback of virtual reality can make virtual screening more cooperative, even under lockdown conditions, and more 'tangible', as the company Nanome has proven with their COVID-19 Town Hall meetings, which aim at the modeling of proteins in 3D space. Aside from the de novo and repurposing efforts, the search for peptides against infection by SARS-CoV-2 has been an important topic (here and here). More recently, Skariyachan and Gopal turned to natural products of herbal origin for their virtual screening (molecular docking and dynamics). In their perspective, the chemical complexity achieved through biosynthesis can overcome the bottleneck of chemical discovery while at the same time drawing on the ancient practices of herbal remedies described in Ayurveda. Other researchers, like Manish Manish, have also turned to libraries of 500,000+ natural compounds to find potential drugs for MPro.
The year is coming to an end, but the pandemic is not, in any way. Now, with the advent of new strains and the widespread vaccination effort put in place, it is more important than ever to keep the fight strong in our labs but also in our personal habits and responsibilities; the same advice that was given at the beginning of the year is still in effect today and will continue to be for the months to come. I want to wish everyone who reads this a happy holiday season, but above all I want to pay a small tribute to the scientists working relentlessly in one of the largest coordinated scientific efforts in modern history, one that can only be compared to the Moon landing or the Manhattan Project; to those scientists and all the healthcare personnel, may you find rest soon, and may your efforts never go unnoticed: thank you for your service.
Molecular orbitals (MOs) are linear combinations of atomic orbitals (AOs), which in turn are linear combinations of other functions called 'basis functions'. A basis, or more accurately a basis set, is a collection of functions obeying a set of rules (such as being orthogonal to each other and possibly normalized) from which all AOs are constructed; although these functions are centered on each atomic nucleus, the canonical way in which they are combined yields delocalized MOs. In other words, an MO can occupy a large space spanning several atoms at once. We don't mind this expansion across a molecule, but what about between two molecules? Calculating the interaction energy between two or more molecular fragments leads to an artificial extra stabilization term that stems from the fact that electrons in molecule 1 can occupy AOs (or the basis functions that form them) centered on atoms of molecule 2.
Fundamentally, the interaction energy of any A—B dimer, Eint, is calculated as the energy difference between the dimer and the separately calculated energies for each component (Equation 1).
Eint = EAB – EA – EB (1)
However, the calculation of Eint by this method is highly sensitive to the choice of basis set, due to the basis set superposition error (BSSE) described in the first paragraph. BSSE is particularly troublesome when small basis sets are used, due to their poor description of dispersion interactions, but treating the error by simply choosing a larger basis set is seldom feasible for systems of considerable size. Note that in equation 1, EA and EB are calculated with the basis sets of A and B respectively; i.e., only EAB uses the larger basis set (that of A and B simultaneously). The counterpoise method is a nifty correction to equation 1 that calculates each component with the full AB basis set (Equation 2):
Eint^CP = EAB^AB - EA^AB - EB^AB (2)
where the superscript AB means the whole AB basis set is used. This is accomplished by using 'ghost' atoms: centers with no nucleus and no electrons, but with basis functions placed on them.
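The bookkeeping behind equations 1 and 2 is simple enough to sketch in a few lines; all the energies below are made-up Hartree values, purely for illustration, not from any real calculation.

```python
# Made-up total energies in Hartree (illustrative only)
E_AB_ab = -200.4321   # dimer AB, full AB basis
E_A_a   = -100.2000   # monomer A, its own basis
E_B_b   = -100.2100   # monomer B, its own basis
E_A_ab  = -100.2050   # monomer A with ghost atoms of B (AB basis)
E_B_ab  = -100.2155   # monomer B with ghost atoms of A (AB basis)

hartree_to_kcal = 627.509

# Uncorrected interaction energy (Equation 1)
E_int = E_AB_ab - E_A_a - E_B_b
# Counterpoise-corrected interaction energy (Equation 2)
E_int_cp = E_AB_ab - E_A_ab - E_B_ab
# The BSSE is the (artificially stabilizing) difference between the two
bsse = E_int - E_int_cp

print(f"E_int    = {E_int * hartree_to_kcal:.2f} kcal/mol")
print(f"E_int^CP = {E_int_cp * hartree_to_kcal:.2f} kcal/mol")
print(f"BSSE     = {bsse * hartree_to_kcal:.2f} kcal/mol")
```

Notice that the corrected interaction energy is always less binding than the uncorrected one, since the monomers are stabilized by the extra ghost functions.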
In Gaussian, BSSE is treated with the counterpoise method developed by Boys and Bernardi. It requires the keyword Counterpoise=N, where N is the number of fragments to be considered (for an A-B system, N=2). Each atom in the coordinates list must be assigned to the fragment to which it pertains; additionally, the charge and multiplicity of each fragment and of the whole supermolecular ensemble must be specified. Follow the example of this hydrogen fluoride dimer.
%chk=HF2.chk
#P opt wB97XD/6-31G(d,p) Counterpoise=2

HF dimer

0,1 0,1 0,1
H(Fragment=1) 0.00 0.00 0.00
F(Fragment=1) 0.00 0.00 0.70
H(Fragment=2) 0.00 0.00 1.00
F(Fragment=2) 0.00 0.00 1.70
For closed-shell fragments the first line is straightforward, but note that the first pair of numbers in the charge-multiplicity line corresponds to the whole ensemble, whereas the following pairs correspond to each fragment in consecutive order. Fragments do not need to be specified contiguously, i.e., you don't need to define all the atoms of fragment 1 and then the atoms of fragment 2, etc.; they can be mixed and the program will still assign them correctly. Just as an example I typed wB97XD, but any other method, DFT or ab initio, may be used; only semiempirical methods do not admit a BSSE calculation, because they don't make use of a basis set in the first place!
The output provides the corrected energy (in atomic units) for the whole system, as well as the BSSE correction (which, added to the previous term, yields the uncorrected energy of the system). Gaussian16 also provides these values in kcal/mol as 'complexation energies', first raw (uncorrected) and then corrected.
BSSE is always present and cannot be entirely eliminated because of the use of finite basis sets but it can be correctly dealt with if the Counterpoise method is included.
I have written about extracting information from excited state calculations but an important consideration when analyzing the results is the proper use of the keyword density.
This keyword lets Gaussian know which density is to be used when calculating certain results. An important property to calculate when dealing with excited states is the change in dipole moment between the ground state and any given excited state. The transition dipole moment is an important quantity that allows us to predict whether a given electronic transition will be allowed or not, and a non-zero change in the dipole moment of a molecule during an electronic transition helps us characterize said transition.
Say you perform a TD-DFT calculation without the density keyword: by default, the results will correspond to the lowest excited state out of all the requested states, which may or may not be the state involved in the transition of interest; besides, you may be interested in the dipole moments of all your excited states.
Three separate calculations would be required to calculate the change of dipole moment upon an electronic transition:
1) A regular DFT for the ground state as a reference
2) TD-DFT, to calculate the electronic transitions; request as many states as you need/want, analyze it and from there you can see which transition is the most important.
3) Request the density of the Nth state of interest to be recovered from the checkpoint file with the following route section:
# TD(Read,Root=N) LOT Density=Current Guess=Read Geom=AllCheck
Replace N with the number of the state which caught your eye in step 2), and LOT with the level of theory you've been using in the previous steps. That should give you the dipole moment for the Nth excited state, which you can compare with the ground state value calculated in 1). Again, if Density=Current is not used, only the properties of N=1 will be printed.
Prof. Mario Molina was awarded the Nobel Prize in Chemistry in 1995, the same year I started my chemistry education at the School of Chemistry of the National Autonomous University of Mexico (UNAM), the same school from which he got his undergraduate diploma. For a chemistry student in the late nineties in Mexico, Prof. Molina was a sort of mythical reference, something to aspire to, a role model, the sort of representation that the Latinx and other underrepresented communities still require and seldom get.
I saw him several times at UNAM, where he'd pack any auditorium almost once a year to talk about various research topics, but I distinctly remember the first time I sort of interacted with him. It was 1997 and I was attending my first congress, the 5th North American Chemistry Congress. Minutes before the official inauguration, which he was supposed to preside over, I caught a glimpse of him in the hallways near the main conference room. Being only 19 years old, I thought it'd be a good idea to chase him down and ask for his autograph and a picture. He was kind enough not to brush me off and took just a minute to shake my hand, sign my book of abstracts, and get his picture taken with me. But cameras back then relied on the user to place a roll of film correctly, and I did not; so the picture, although it happened, doesn't exist. Because of this and other anecdotes, that congress cemented my love for chemistry. I never asked for a second picture on the few subsequent occasions I had the pleasure of hearing him talk.
Prof. Molina was an advocate of green and sustainable sources of energy. His work predicted the depletion of the ozone layer by CFCs, and his struggle brought about the banning of CFCs and other substances which interfere with the replenishment of ozone in the stratosphere. Today his legacy remains, but so do his pending battles in the quest for new policies favoring the use of green, alternative forms of energy. May he rest in peace and may we continue his example.
This is a guest post by our very own Gustavo “Gus” Mondragón whose work centers around the study of excited states chemistry of photosynthetic pigments.
When you’re calculating excited states (no matter the method you’re using, TD-DFT, CI-S(D), EOM-CCS(D)) the analysis of the orbital contributions to electronic transitions poses a challenge. In this post, I’m gonna guide you through the CI-singles excited states calculation and the analysis of the electronic transitions.
I'll use the adenine molecule for this post. After doing the corresponding geometry optimization with the method of your choice, you can do the excited states calculation. For this, I'll use two methods: CI-Singles and TD-DFT.
The route section for the CI-Singles calculation looks as follows:
#p CIS(NStates=10,singlets)/6-31G(d,p) geom=check guess=read scrf=(cpcm,solvent=water)
adenine excited states with CI-Singles method
I use the same geometry from the optimization step and request only 10 singlet excited states. The CPCM implicit solvation model (solvent=water) is requested. If you want to do TD-DFT instead, the route section should look as follows:
#p FUNCTIONAL/6-31G(d,p) TD(NStates=10,singlets) geom=check guess=read scrf=(cpcm,solvent=water)
adenine excited states with TD-DFT method
Where FUNCTIONAL is the DFT exchange-correlation functional of your choice. Here I strongly advise against using plain B3LYP; CAM-B3LYP is a noble choice to start with.
Both calculations give us the excited state information: excitation energy, oscillator strength (the f value), excitation wavelength, and multiplicity:
Excitation energies and oscillator strengths:
Excited State 1: Singlet-A 6.3258 eV 196.00 nm f=0.4830 <S**2>=0.000
11 -> 39 -0.00130
11 -> 42 -0.00129
11 -> 43 0.00104
11 -> 44 -0.00256
11 -> 48 0.00129
11 -> 49 0.00307
11 -> 52 -0.00181
11 -> 53 0.00100
11 -> 57 -0.00167
11 -> 59 0.00152
11 -> 65 0.00177
The data above correspond to some of the electron transitions involved in this excited state; I had to cut the list because there are a lot of them for all the excited states. If you have done excited state calculations before, you'll know that the HOMO-LUMO transition is usually an important one, but not the only one to be considered. This is where we calculate the natural transition orbitals (NTOs); with these orbitals we can analyze the electron transitions.
For the example, I’ll show you first the HOMO-LUMO transition in the first excited state of adenine. It appears in the long list as follows:
35 -> 36 0.65024
The value 0.65024 corresponds to the transition amplitude, but on its own it isn't enough for the excited state analysis. We must calculate the NTOs of an excited state from a new Gaussian input file, reading the results from the checkpoint file we used to calculate the excited states. The file looks as follows:
#p SP geom=allcheck guess=(read,only) density=(Check,Transition=1) pop=(minimal,NTO,SaveNTO)
Let me point out some important things about this last file. Note that no level of theory is needed: all the data is read from the checkpoint file "adenine.chk" and saved into the new checkpoint file "adNTO1.chk". We must use the previously calculated density and specify the transition of interest, i.e., the excited state we want to analyze. As we don't need to specify the charge, multiplicity, or even the comment line, this calculation finishes really fast.
After this last calculation, we take the new checkpoint file "adNTO1.chk" and format it:
formchk -3 adNTO1.chk adNTO1.fchk
If we open this formatted checkpoint file with GaussView, Chemcraft, or whichever visualizer you prefer, we will see something interesting in the MO diagram, as follows:
We can see that the frontier orbitals now show the same eigenvalue of 0.88135, which corresponds to the actual contribution of this transition to the first excited state. As these orbitals contribute the most, we can plot them using the cubegen utility:
cubegen 0 mo=homo adNTO1.fchk adHOMO.cub 0 h
This last command line plots the equivalent of the HOMO. If we want to plot the LUMO, just change the "homo" keyword to "lumo"; it doesn't matter whether it is written in capital letters or not.
You should be aware that natural transition orbitals are quite different from molecular orbitals. For visual comparison, I've also printed the molecular orbitals, obtained from the optimization and from the excited states calculations, without calculating NTOs:
These are the molecular frontier orbitals, plotted with Chimera with 0.02 as the isovalue for both phase spaces:
The frontier NTOs look qualitatively the same, but that’s not necessarily always the case:
If we analyze these NTOs on a hole-electron model, the HOMO refers to the hole space and the LUMO refers to the electron space.
The two sets may look similar, but the frontier NTOs and the frontier MOs are quite different, and the former are the ones involved in the first excited state of adenine. The electron transition will be reported as follows:
If I were to make a graphical summary of this topic, it would be the following:
NTO analysis is useful no matter whether you calculate excited states using CIS(D), EOM-CCS(D), TD-DFT, CASSCF, or whichever excited state method you choose. NTOs are also useful for population analysis in excited states, but those calculations require other software: MultiWFN is an open-source code that allows you to do this analysis, and another one is TheoDORE, which we'll cover in a later post.
The Computational Chemistry Comparison and Benchmark DataBase (CCCBDB) from the National Institute of Standards and Technology (NIST) collects experimental and calculated thermochemistry-related values for 1968 common molecules, constituting a vast source of benchmarks for various kinds of calculations.
In particular, scaling factors for vibrational frequencies are very useful when calculating vibrational spectra. These scaling factors are arranged by levels of theory ranging from HF to MP2, DFT, and multireference methods. These scaling factors are obtained by least squares regression between experimental and calculated frequencies for a set of molecules at a given level of theory.
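The regression itself is worth seeing once. Minimizing the sum of squared residuals between scaled calculated frequencies and experimental ones over a scaling factor lam has a closed-form solution, lam = sum(nu_calc * nu_exp) / sum(nu_calc^2); here is a sketch with made-up frequencies (not CCCBDB data).

```python
# Made-up calculated vs experimental frequencies (cm^-1), illustrative only
nu_calc = [3100.0, 1750.0, 1200.0, 950.0]
nu_exp  = [2980.0, 1705.0, 1160.0, 915.0]

# Closed-form least-squares solution for nu_exp ~ lam * nu_calc
# (a one-parameter fit through the origin, as used for scaling factors)
lam = sum(c * e for c, e in zip(nu_calc, nu_exp)) / sum(c * c for c in nu_calc)
print(f"scaling factor = {lam:.4f}")
```

Harmonic frequencies typically overestimate the experimental fundamentals, so scaling factors like this come out slightly below one, as in this toy example.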
Aside from vibrational spectroscopy, a large number of structural and energetic properties can be found and estimated for small molecules. A quick formation enthalpy can be calculated from experimental data and then compared to the reported theoretical values at a large number of levels of theory. Moments of inertia, enthalpies, entropies, charges, frontier orbital gaps, and even some odd values or calculations gone awry are pointed out, so you know whether you're dealing with a particularly problematic system. The CCCB Database includes tutorials and input/output files for performing these kinds of thermochemistry calculations, making it also a valuable learning resource.
Every computational chemist should be aware of this site, particularly when collaborating with experimentalists or when carrying out calculations meant to replicate experimental data. The vastness of the site calls for a long dive to explore its possibilities and capabilities for more accurate calculations.
Having a paper rejected is one of the certainties of academic life. While there are some strategies to decrease the probability of facing a rejection, today I want to focus on my tips to deal with them—particularly for the benefit of younger scientists.
There are two broad kinds of rejections: desk rejections and rejections after peer review. In either case, the best advice is never to take action right after receiving the dreaded rejection letter. Take a day or two, then react accordingly with a cooler head. Remember, this isn't about you; it's hard not to take it personally, but trust me, it isn't.
The first kind, desk rejections, come directly from the chief or associate editors of the journal to which you submitted your work. They tend to be quick and rather uninformative, except for maybe noting the incompatibility, to put it nicely, of your work with the scope of the journal. These are also sometimes the hardest to face, since they make you feel your work is simply not good enough to be published; but they're also the quickest, and in the publish-or-perish scheme of things, time is key. After getting a desk rejection, if no other input is given, just try again; one tip, though not an infallible one, for choosing a proper journal is to look at which journals you are citing in your own work and choose the one cited most frequently. Sometimes editors might offer a transfer to another journal from the same publishing house; my advice is to always say yes to transfers: the submission is made for you by the editorial staff, your work sort of becomes recommended between the editors involved, and it expedites the start-again process. Of course, a transfer does not mean your manuscript will get accepted, but whenever one is offered there is a good chance the first editor thinks your work should be kept within their publishing house instead of risking you going to another one. Appealing a desk rejection is highly discouraged, since it practically never works. Sure, you may think the editor will kick themselves once you get the Nobel Prize, but telling them so, particularly in colorful language, will not make them change their minds.
Rejections after peer review are trickier. If your manuscript went out to peer review, it means the editors in charge thought your work publishable, but of course it needed to be looked at by experts to make sure it was done the right way, with all or most things covered (you know what they say, two heads are better than one; try three!). This kind of rejection takes longer, usually two or three weeks, sometimes even more, but all things being fair, polite, and objective, it is also the most informative. Reviewers will try to find holes in your logic and flaws in your research, and when they find them they will not hold back; you're in for the hard truth. So of course this kind of rejection is also hard to take: it makes you feel, again, like your work is not worthy, that you're not worthy as a scientist. But the big advantage is that you now have a blueprint of things to fix in your manuscript. A set of experiments is missing? Run them. Key literature wasn't cited? Read it and cite it appropriately. Take peer review objectively, but never dismiss it by simply submitting the manuscript as is to a different journal, for chances are you'll get some of the same reviewers; and even if you don't, it's unethical to dismiss the advice of peers. They are your peers in the end, not your bosses but your peers; don't lose sight of that. It's also very frustrating for reviewers to find that authors managed to get published without paying the slightest attention to their suggestions. Appealing a peer review rejection is hard but doable, and then you have to weigh what you value most: your paper in its original form published in that specific journal, or fixing it and starting again.
An appeal upon a flat rejection is hardly ever won but it may well establish a conversation with other scientists (the referees) about their point of view on your work, just don’t think you’ve made instant buddies who will now coach you through academic life.
The peer review system is far from perfect, but if done properly it is still the best thing we've got. Other alternatives are being tested nowadays to reduce bias, like open reviews signed and published by the reviewers themselves, or double and even triple blind peer review (in the latter, not even the editor knows the identities of the authors or reviewers), but until they are proven useful we largely have to cope and adapt to single blind peer review (just play nice, people). In some instances the dreaded third reviewer appears, and even a fourth and a fifth; since there are no written rules, and I'm not aware of any journal specifying the number of referees involved in handling a manuscript, opinions among reviewers can vary widely, ranging from accept to reject. Bringing in another reviewer may be due to the editor thinking one or more of the reviewers didn't do their job properly (in either direction), in order to break the tie or outweigh the opinion of a clearly biased reviewer. If you think there is bias, ask the editor whether a new set of reviewers may be brought in to complete the process; more often than not they will say no, but if you raise a good point they might feel compelled to do so.
"Science is a process that starts at the library and ends at the library."
Dr. Jesús Gracia-Mora, School of Chemistry, UNAM, ca. the nineteen nineties
These are truths we must learn at a young age. A science project does not end at the lab but at the library; therefore I let my students, even the undergrads, carry out the submission process of their manuscripts along with me, and I involve them in the peer review process (sometimes, to some limited extent, even when I'm the reviewer), just so they know that getting a rejection letter is part of the process and should never be equated with the relative quality or self-worth of a scientist, since that is hardly what the publication process looks at.
So, in a nutshell, if you got a rejection letter, get back on the proverbial saddle and try again. And again. And once again.