Democrats in the Senate stayed up all night talking about the perils of climate change. But while there’s hope that technology, changing consumer and business practices or new policies could finally turn the tide and slow or reverse climate change, there are also good reasons to think those efforts will fail.

- There isn’t enough research and development into ways of generating energy without emitting carbon dioxide. “The U.S. energy sector invests only 0.23 percent of its revenue in research and development, and federal R&D spending is only half of what it was in 1980,” says a new paper by non-profit centrist policy group Third Way.
- The price of fossil fuels doesn’t include the cost of environmental damage and climate change. Legislation, meanwhile, isn’t doing the trick when it comes to raising the price. A cap-and-trade program is complicated, virtually impossible politically, and not working all that well in Europe. A carbon tax – even a small gasoline tax – won’t be adopted by Congress.
- Many countries still subsidize fossil fuels, including those in the Middle East where consumption is growing fastest. The International Monetary Fund puts the annual cost at $1.9 trillion (on a post-tax basis).
- China is determined to raise living standards with more cars, more power plants, and more everything. In 2012, the average Chinese resident emitted 6.2 metric tons of carbon dioxide a year, versus 17.6 metric tons for the average American. Closing even one-third of that gap (even with a more energy-efficient economy) will generate a lot more emissions.
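The arithmetic behind that last point can be made explicit. The per-capita figures come from the text above; the Chinese population figure (about 1.35 billion in 2012) is an added assumption for this back-of-the-envelope sketch.

```python
# Rough illustration of the per-capita emissions gap cited above.
# The population figure (~1.35 billion for China in 2012) is an
# assumption added for this sketch, not taken from the article.
china_per_capita = 6.2     # metric tons CO2 per year (2012)
us_per_capita = 17.6       # metric tons CO2 per year (2012)
china_population = 1.35e9  # assumed, approximate 2012 figure

gap = us_per_capita - china_per_capita           # 11.4 t per person
extra_per_capita = gap / 3.0                     # closing one-third of the gap
extra_total = extra_per_capita * china_population

# prints "Extra emissions: 5.1 billion metric tons CO2/year"
print(f"Extra emissions: {extra_total / 1e9:.1f} billion metric tons CO2/year")
```

Roughly five billion extra metric tons a year is comparable in magnitude to total current U.S. emissions, which is why this bullet point matters.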

The Global Energy Initiative, a non-profit group devoted to promoting clean energy and slowing climate change, asked a handful of economists – liberal and conservative – for their views on what to do about climate change, and the replies are somewhat gloomy.

Former top Obama economic policy adviser and former Harvard University president Larry Summers lists three items: eliminating energy subsidies, more funding for basic energy research, and carbon taxes. “As a practical matter, my guess is the world will produce non-fossil-fuel power in the next 25 years at today’s fossil fuel prices, or it will fail with respect to global climate change,” he says.

The iconoclastic Bjorn Lomborg, director of the Copenhagen Consensus Center and adjunct professor at Copenhagen Business School, says: “The only way to move towards a long-term reduction in emissions is if green energy becomes much cheaper.” He supports suggestions to increase research and development 10-fold, to $100 billion a year globally.

Tyler Cowen, professor of economics at George Mason University said: “The most likely scenario is that we will find out just how bad the climate change problem is slated to be.”

If the interior of the Earth is not homogeneous, then the speed of signals traveling inside the Earth is not homogeneous either. When large earthquakes occur, they generate strong seismic waves; these are detected and recorded by seismographs all around the world and provide raw data for further analysis. Reconstructing the interior of the Earth from what is recorded at the surface means solving an “inverse problem”. When an earthquake occurs, a first inverse problem to solve is to localize its epicenter.
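A toy version of that first inverse problem can be solved with a simple grid search. The flat two-dimensional geometry, the uniform wave speed, and the station coordinates below are all simplifying assumptions made for illustration.

```python
import numpy as np

# Toy epicenter localization from P-wave arrival times at four stations.
# Assumptions for this sketch: a flat 2-D geometry, a uniform wave
# speed, and synthetic (noise-free) arrival times.
v = 8.0                                   # assumed uniform speed, km/s
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [80.0, 90.0]])
true_epi = np.array([40.0, 25.0])
t0 = 3.0                                  # unknown origin time, seconds
arrivals = t0 + np.linalg.norm(stations - true_epi, axis=1) / v

# Grid search over candidate epicenters: at the true location, the
# residuals t_i - d_i/v all equal the origin time, so their spread is 0.
best, best_spread = None, np.inf
for x in np.arange(0.0, 101.0, 0.5):
    for y in np.arange(0.0, 101.0, 0.5):
        d = np.linalg.norm(stations - np.array([x, y]), axis=1)
        spread = (arrivals - d / v).std()
        if spread < best_spread:
            best, best_spread = (float(x), float(y)), spread

print(best)   # recovers (40.0, 25.0)
```

Real localization uses a 3-D Earth, noisy data, and velocity models, but the principle is the same: choose the source that makes the surface recordings consistent.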

Earthquakes generate P-waves (pressure waves) and S-waves (shear waves).

S-waves are strongly damped when traveling in viscous media, and hence are not recorded far from the epicenter. This provides evidence for a liquid interior, as well as information on the thickness of the crust. P-waves, on the contrary, travel throughout the Earth and can be recorded very far from the epicenter.

Inge Lehmann was a Danish mathematician. She worked at the Danish Geodetic Institute, where she had access to the data recorded at seismic stations around the world. She discovered the inner core of the Earth in 1936. At the time, it was known that the mantle surrounded the core. Seismic waves travel at approximately 10 km/s in the mantle and 8 km/s in the core; hence, the waves are refracted when entering the core. This should mean the existence of an annular region on the Earth's surface, centered at the epicenter, where no seismic wave should be detected. But Inge Lehmann discovered that signals were recorded in the forbidden region. A piece of the puzzle was missing… She built a toy model (see figure) that could explain the observations and was later tested and adopted.

In this toy model she inserted an inner core in which the signals would travel at 8.8 km/s.

If you analyze the law of refraction, namely $\frac{\sin\theta_1}{v_1}= \frac{\sin\theta_2}{v_2}$, you see that the equation may have no solution for $\theta_2$ if $v_1$ is smaller than $v_2$ and $\theta_1$ is sufficiently large. This means that if a wave arrives on the slow side, sufficiently tangentially to the surface separating the two media, then it cannot enter the second medium; it is instead reflected at the separating surface. Hence, seismic waves can be reflected off the inner core. This is why they could be detected in the forbidden regions.
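This computation can be sketched in a few lines, using the velocities quoted above (about 8 km/s in the core and 8.8 km/s in Lehmann's inner core); the incidence angles chosen below are illustrative.

```python
import math

def refract(theta1_deg, v1, v2):
    """Apply Snell's law sin(theta1)/v1 = sin(theta2)/v2.
    Returns the refraction angle in degrees, or None when
    sin(theta2) would exceed 1 (total internal reflection)."""
    s = math.sin(math.radians(theta1_deg)) * v2 / v1
    if s > 1.0:
        return None          # the wave is reflected, not refracted
    return math.degrees(math.asin(s))

# Waves in the core (~8 km/s) hitting the faster inner core (~8.8 km/s):
print(refract(40.0, v1=8.0, v2=8.8))       # ~45.0: refracted away from the normal
print(refract(70.0, v1=8.0, v2=8.8))       # None: reflected off the inner core
print(math.degrees(math.asin(8.0 / 8.8)))  # critical angle, ~65.4 degrees
```

Beyond the critical angle, every wave is reflected rather than transmitted, which is exactly how reflections off the inner core send signals into the forbidden region.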

The toy model appearing here was completed from a figure by Inge Lehmann and illustrates the reflected waves in black. Note that some refracted waves (in brown) also enter the forbidden regions.

This level of cooperation within the world mathematical community has been without precedent. Of course, compared to what existed 20 years ago, technology makes it easier to collaborate across boundaries. But there is more to it: MPE2013 helped change the image of mathematics among students and the general public. Many activities sponsored by MPE2013 illustrated the role that mathematics plays not only in addressing the planetary challenges but also in discovering and understanding our planet, its interior dynamics and its movement in the solar system. Teachers have new material to provide exciting answers to the question: What is mathematics useful for? All this material is shared on the MPE website; it will be further enriched over the coming years and will be a lasting legacy of MPE2013—tangible evidence that collaboration is beneficial for our community.

The interest in MPE2013 among the research community has also been considerable. As research mathematicians, we are captivated by mathematical problems, and MPE2013 has demonstrated that planetary issues lead to many new and challenging problems. In the framework of MPE2013, we have organized summer schools for young researchers. Of course, a researcher cannot be trained in a few weeks, but what has been accomplished is really a first step toward what should be a long-term goal. This is true especially in view of the fact that the problems related to planet Earth are extremely complex; the ingredients are all interconnected and cannot be studied in isolation. As mathematicians, we have some experience building and analyzing models of complex systems, but we absolutely need to cooperate with other disciplines to capture the essence of the problems and improve our models so they are as faithful as possible to the real world, yet manageable in the context of mathematics. MPE2013 has exposed the immensity of the research field that needs to be explored.

“Mathematics of Planet Earth” needs to continue, and this is why MPE2013 will morph into MPE on January 1, 2014. MPE will maintain the momentum of multilevel collaborations (researchers and educators) within the world mathematical community. The foundation has been laid; MPE will take on the long-term task, including the training of new researchers and the support of collaborations with researchers in other scientific disciplines.

This contribution is the last official post of the MPE2013 Daily Blog. MPE will have its own blog, which will appear on a less regular basis. Please contact us whenever you want to report on new developments of MPE.

Hans Kaper and Christiane Rousseau

For the English blog, we formed a team, and each member of the team took responsibility for providing posts one day a week. The team was chaired by Hans Kaper, who was also responsible for posting the blog entries. Except for short periods when he was traveling, Hans posted each entry: this included dealing with the formulas and images whenever there were any, and Hans would always take the time to do some copy-editing. We are extremely grateful to him for the fantastic job he accomplished, and I use the opportunity of this post today to thank him on behalf of the readers of our blog. Many thanks also to the other members of the editorial team: Estelle Basor, Brian Conrey, Jim Crowley, Bogdan Vernescu, and Kent Morrison, who also helped with the posting of the blogs.

I cannot mention the English blog without also mentioning the French blog. Its title was “Un jour, une brève”, with a short “story” on Planet Earth every working day. The description was the following: The French blog aims at publishing one text a day (except on weekends) in connection with the themes of “Mathematics for the Planet Earth”. These will be very short notices, say half a page each (in principle in French), without any technical details, intended to be read by a broad audience (which may include pre-university students). The goal is two-fold: we wish to explain, on the one hand, how mathematics can bring useful information and, on the other hand, how mathematical activity is supplied with new problems and new questions raised by the surrounding world. The French team also brilliantly won their bet, with high-quality contributions and several thousand hits a day! Their posts will be published in a volume sometime next year. On behalf of MPE2013, we would like to thank their fantastic executive team: Martin Andler, Liliane Bel, Sylvie Benzoni, Thierry Goudon, Cyril Imbert, Pierre Pansu and Antoine Rousseau, who worked with an efficient editorial team.

Some countries also maintained blogs that posted somewhat less frequently. We mention, for instance, the blog of MPE Australia. To all these countries we express our congratulations and thanks.

Christiane Rousseau

One of the challenges for a mathematician interested in this topic is the range of biological questions that are associated with this area. The concept of evolution—the change in inherited characteristics of populations over successive generations—affects every level of biological organization, from the molecular to organismal and species level. It is also associated with a variety of questions about how and why humans have evolved to what we are today (evolutionary neuroscience, physiology, psychology), as well as in our understanding of health and disease (evolutionary medicine). Since diversity of life on Earth is crucial to human well-being and sustainable development, evolution is also highly connected to the impacts of climate change. This goes to show the importance of fully comprehending fundamental evolutionary mechanisms.

The driving processes of evolution—mutation, genetic drift and natural selection—are, independently, relatively easy to understand. However, when combined they lead to different phenomena, and it is remarkably tricky to unravel the role of the different evolutionary causes from their signatures. At the molecular level, most of the complexity is due to exchanges of genetic material (recombination, gene duplication, gene swapping, etc.). At the organismal/ecological level the interactions between the species (food webs, predator-prey systems, specialists vs. generalists) or between individuals with different organizational/social roles (cooperators, defectors, etc.) lead to complex dynamics of population structures.

All of these issues were extensively discussed in the six workshops held at the CRM from August to December, 2013:

1) “Random Trees” focused on stochastic techniques for analyzing random tree structures;

2) “Mathematics for an Evolving Biodiversity” discussed probabilistic and statistical methodologies for drawing inferences from contemporary biodiversity;

3) “Mathematics of Sequence Evolution” presented computational approaches to investigation of function and structure of genetic sequences;

4) A minicourse on “Theoretical and Applied Tools in Population and Medical Genomics” gave an introduction to modern population genetics and genomics;

5) “Coalescent Theory” focused on the probabilistic techniques for reconstructing evolutionary relationships using a backwards in time approach; and

6) “Biodiversity and Environment — Viability and Dynamic Games Perspectives” combined biological, economical, social and interdisciplinary perspectives in mathematical modeling of individual or species interactions and their consequences for biodiversity and the environment.

A sequence of special lectures was given during the term by the Aisenstadt chairs, David Aldous (UC Berkeley) and Martin Nowak (Harvard), as well as by the Clay senior scholar, Bob Griffiths (Oxford). Abstracts and slides of the presentations can be found **here**.

What was apparent to anyone following all of the above workshops was the varied combination of approaches from distinct scientific disciplines: genomics, ecology, economics, computational biology, statistical genetics, and bioinformatics. Given the production and analysis of massive environmental, genetic and genomic data, it is clear that mathematical techniques are extremely useful in the advancement of these scientific areas. As randomness plays a prominent role in evolutionary processes, stochastic processes and random combinatorial objects are key players in its analysis and development. For young mathematicians interested in the area I would highly recommend a solid background in probability and stochastic processes, and some practice in simulating random processes.

Lea Popovic

Dept of Mathematics and Statistics

Concordia University

Haemostasis results in the formation of a blood clot at the injury site, which stops the bleeding. The mechanism is based on a complex interaction between four different systems: the vascular system, blood cells, the coagulation pathways, and fibrinolysis. Malfunctions or changes in these systems can result in imbalance and lead to either bleeding or thrombotic disorders. Thrombosis is a life-threatening clot formation that can be caused by numerous diseases and conditions such as atherosclerosis, trauma, stroke, infarction, cancer, sepsis, surgery and many others. Thrombosis is the leading immediate cause of mortality and morbidity in modern society and a major cause of complications and occasionally death in people admitted to hospitals. Anti-coagulation medications that are usually administered to such patients carry a serious risk of bleeding with potentially fatal consequences. This explains the necessity of further studies of haemostasis.

During the last few decades mathematics has played an important role in the study and analysis of blood clotting in vitro and in vivo. Each of the systems involved has been extensively modeled and analyzed using mathematical tools. This has enabled the posing, evaluation and justification of biological hypotheses. First of all, knowledge of fluid flows and hydrodynamics was used to model blood as a homogeneous viscous fluid, its flow in various structures of vessels and its non-Newtonian properties. These models are usually based on the Navier-Stokes equations. Furthermore, as the non-Newtonian properties of blood originate from the blood cells suspended in blood plasma, blood was also modeled as a complex fluid in which individual cells are described. Such models allow the study of cellular interactions and of the behavior and distribution of cell populations in flow. In order to describe the complex structures and behavior of individual blood cells, many cell models were developed and compared to experimentally observed characteristics and behavior, especially for erythrocytes. An example of such a model is given in [1].

The complex coagulation pathways, which involve more than 50 proteins and their interactions, have been extensively modeled and analyzed by systems of partial differential equations (PDEs). The systems describe the concentrations of proteins, their diffusion and their reactions in vitro or in vivo (in flow). A few models have also been developed to describe and study the equally complex regulatory network of platelet interactions and the corresponding intracellular signalling.

Formation of a blood clot in vivo consists of two main processes – platelet aggregation and blood coagulation. The former results in the formation of a platelet aggregate, while the latter ends with fibrin polymerization and the formation of a fibrin net. The two processes influence each other and together enable blood clot formation. The flow velocity is reduced inside the platelet aggregate, which protects the protein concentrations from being carried away by the flow, thus enabling the coagulation reactions to occur in its core. The fibrin net which forms inside the platelet aggregate reinforces it, allowing it to grow to the necessary size and to withstand the pressure from the flow.

The approaches to modeling blood clotting in flow can be divided into three main groups: continuous, discrete and hybrid models. The continuous models use mathematical analysis and systems of PDEs to describe both the coagulation reactions [2] and the flow. In these models, blood cells (platelets) are also modeled in terms of concentrations, while the clot can be described as a part of the fluid with significantly increased viscosity. Such models correctly describe flow properties and protein concentrations in the flow. However, they are unable to capture the mechanical properties and possible rupture of the clot, which originate from cell-cell interactions. The second approach is to model both the flow and individual cells by discrete methods. There are various discrete methods that can be used to model flow, ranging from purely simulation techniques to methods that are lattice discretizations of hydrodynamic equations. Although suitable for the description of individual cells and their interactions, such methods often do not correctly model the concentrations of proteins and their diffusion in flow.

The third group consists of hybrid models, which combine continuous and discrete methods in an attempt to use their individual strengths and give a more suitable description of a complex phenomenon, such as blood clotting. Within hybrid models various combinations of methods are possible. An example is given in [3], where the flow of blood plasma and the platelets suspended in it is modeled with a discrete method called the Dissipative Particle Dynamics method (the same as in [1]). The method is used to model fluid flows because it correctly reproduces hydrodynamics. As the platelet aggregate demonstrates elastic properties, the interaction between platelets in the model, i.e., platelet adhesion, is described by Hooke’s law. A distinction is made between the weak GPIbα platelet connections and the stronger connections due to platelet activation. The continuous part of the model describes fibrin concentration and diffusion in flow. The model was used to study how the platelet aggregate influences and protects protein concentrations from the flow. Additionally, the model suggested a possible mechanism by which the platelet clot stops growing:

- In the beginning of clot growth, platelets aggregate at the injury site due to weak connections (Figure 2, a). The injury site is modelled as several platelets attached to the vessel wall. They initiate clot growth. Since the flow velocity is sufficiently high, the concentration of fibrin remains low.
- The platelet clot continues to grow due to weak connections and the flow speed inside it decreases. It makes it possible for the coagulation reaction to start, and fibrin concentration gradually increases (Figure 2, b).
- This process continues until the clot becomes sufficiently large (Figures 2, c, d). Fibrin covers a part of the clot and strong platelet connections appear inside it.
- Flow pressure exerts mechanical stresses on the clot and weak connections can rupture. In this case the clot breaks and its outer part is removed by the flow (Figure 2, e).
- Its remaining part is covered by fibrin and thus it cannot attach new platelets. The final clot form is shown in (Figure 2, f).

The model described in [3] investigates interactions between the platelet aggregate and the fibrin clot. However, its overly simplified model of the blood coagulation pathways results in a less accurate shape of the final clot. Nevertheless, its indications are confirmed by a model with a more complete description of the blood coagulation pathways, which also produces a more realistic final clot shape (Figure 3).

The hybrid approaches show great potential for modelling complex phenomena. They are suitable for multiscale modelling, and it can be expected that in the near future they will offer new insights into many biological processes of great interest.

References:

[1] N. Bessonov, E. Babushkina, S.F. Golovashchenko, A. Tosenberger, F. Ataullakhanov, M. Panteleev, A. Tokarev, V. Volpert, Numerical Simulations of Blood Flows With Non-uniform Distribution of Erythrocytes and Platelets, Russian Journal of Numerical Analysis and Mathematical Modelling, 2013, Vol. 28, no. 5, 443-458.

[2] Y.V. Krasotkina, E.I. Sinauridze, F.I. Ataullakhanov, Spatiotemporal Dynamics of Fibrin Formation and Spreading of Active Thrombin Entering Non-recalcified Plasma by Diffusion, Biochimica et Biophysica Acta, 1474 (2000), 337-345.

[3] A. Tosenberger, F. Ataullakhanov, N. Bessonov, M. Panteleev, A. Tokarev, V. Volpert, Modelling of clot growth in flow with a DPD-PDE method, Journal of Theoretical Biology 337, 2013, pp. 30-41.

Alen Tosenberger

Institut Camille Jordan

Université Claude Bernard Lyon 1, France

tosenberger@math.univ-lyon1.fr

dracula.univ-lyon1.fr

Several areas of mathematics play fundamental roles in numerical weather prediction (NWP), including mathematical models and their associated numerical algorithms, computational nonlinear optimization in very high dimension, the manipulation of huge datasets, and parallel computation. Even after decades of active research and the increasing power of supercomputers, the forecast skill of numerical weather models extends to only about six days. Improving current models and developing new models for NWP have always been active areas of research. Operational weather and climate models are based on the Navier-Stokes equations coupled with various interacting earth components such as the ocean, land terrain, and water cycles. Many models use a latitude–longitude spherical grid. Its logically rectangular structure, orthogonality, and symmetry properties make it relatively straightforward to obtain various desirable, accuracy-related properties. On the other hand, the rapid development of massively parallel computation platforms constantly renews the impetus to investigate better mathematical models using traditional or alternative spherical grids. Interested readers are referred to a recent survey paper in the Quarterly Journal of the Royal Meteorological Society (Vol. 138: 1-26).

Initial conditions must be generated before one can compute a solution for weather prediction. The process of entering observation data into the model to generate initial conditions is called data assimilation. Its goal is to find an estimate of the true state of the weather based on observations (e.g., sensor data) and prior knowledge (e.g., mathematical models, system uncertainties, and sensor noise). A family of variational methods called 4D-Var is widely used in NWP for data assimilation. In this approach, a cost function based on the initial and sensor error covariances is minimized to find the solution of a numerical forecast model that best fits a series of observational datasets distributed in space over a finite time interval. Another family of data assimilation methods is the ensemble Kalman filters. These are reduced-rank Kalman filters based on sample error covariance matrices, an approach that avoids propagating a full-size covariance matrix, which is impossible even for today’s most powerful supercomputers. In contrast to the interpolation methods used in the early days, 4D-Var and ensemble Kalman filters are iterative methods that can be applied to much larger problems. Yet the effort of solving problems of even larger size is far from over. Current day-to-day forecasting uses global models with grid resolutions between 16 and 50 km, and about 2 to 20 km for short-period local forecasting. Developing efficient and accurate data assimilation algorithms for higher-resolution models is a long-term challenge that will face mathematicians and meteorologists for many years to come.
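The sample-covariance idea behind the ensemble Kalman filter can be sketched in a few lines. This toy example is illustrative only: the state and observation dimensions, the observation operator, and the perturbed-observations update used below are assumptions for the sketch, not features of any operational system.

```python
import numpy as np

# Minimal ensemble Kalman filter update on a toy problem.  Operational
# NWP systems have state dimensions of 1e8+ and use O(100) ensemble
# members; the sizes and the synthetic "truth" here are illustrative.
rng = np.random.default_rng(0)
n, m, N = 40, 10, 25          # state dim, obs dim, ensemble size

x_true = np.sin(np.linspace(0, 2 * np.pi, n))
H = np.zeros((m, n))          # observe every 4th state component
H[np.arange(m), np.arange(0, n, 4)] = 1.0
obs_err = 0.1
y = H @ x_true + obs_err * rng.standard_normal(m)

# Prior ensemble: truth plus noise (stands in for a model forecast)
X = x_true[:, None] + 0.5 * rng.standard_normal((n, N))

# Sample covariance from ensemble anomalies (the "reduced rank" part:
# the full covariance is never formed from a model propagation)
A = X - X.mean(axis=1, keepdims=True)
P = A @ A.T / (N - 1)
R = obs_err**2 * np.eye(m)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain

# Update each member against perturbed observations
Y = y[:, None] + obs_err * rng.standard_normal((m, N))
Xa = X + K @ (Y - H @ X)

prior_rmse = np.sqrt(np.mean((X.mean(1) - x_true) ** 2))
post_rmse = np.sqrt(np.mean((Xa.mean(1) - x_true) ** 2))
print(prior_rmse, post_rmse)
```

The key design point is that only the m-by-m matrix `H P H' + R` is ever inverted, which is what makes the approach feasible when the state dimension is enormous.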

**Wei Kang
Naval Postgraduate School
Monterey, California**

A major challenge in assessing the impacts of toxic chemicals on ecological systems is the development of predictive linkages between chemically-caused alterations at molecular and biochemical levels of organization and adverse outcomes on ecological systems.

In April, the National Institute for Mathematical and Biological Synthesis (NIMBioS) will host an Investigative Workshop on “Predictive Models for ERA.” The workshop will bring together a multidisciplinary group of molecular and cell biologists, physiologists, ecologists, mathematicians, computational biologists, and statisticians to explore the challenges and opportunities of developing and implementing models that are specifically designed to mechanistically link levels of biological organization in a way that can inform ecological risk assessment (ERA) and ultimately environmental policy and management. The focus will be on predictive systems models in which properties at higher levels of organization emerge from the dynamics of processes occurring at lower levels of organization.

Specific goals are to (1) identify advantages and limitations of various predictive systems models to connect chemically caused changes in organismic and suborganismic processes with outcomes at higher levels of organization that are relevant for environmental management; (2) identify the criteria that models of this kind have to fulfill in order to be useful for informing ecological risk assessment and management; and (3) propose a series of recommendations for further action.

Co-organizing the workshop are Valery Forbes, professor and director of the School of Biological Sciences at University of Nebraska, Lincoln and Richard Rebarber, professor of mathematics, also at UNL.

If you have an interest in these topics, the workshop is still accepting applications. The application deadline is Jan. 20, 2014. Individuals with a strong interest in the topic, including post-docs and graduate students, are encouraged to apply. **Click here** for more information and on-line registration.

NIMBioS Investigative Workshops focus on broad topics or a set of related topics, summarizing/synthesizing the state of the art and identifying future directions. Organizers and key invited researchers make up approximately one half of the 30-40 participants in a workshop; the remaining 15-20 places are filled through open application from the scientific community. If needed, NIMBioS can provide support (travel, meals, lodging) for Workshop attendees.

Coronary heart disease accounts for 18% of deaths in the United States every year. The disease results from a blockage of one or more arteries that supply blood to the heart muscle. This occurs as a result of a complex inflammatory condition called atherosclerosis, which leads to a progressive buildup of fatty plaque near the surface of the arterial wall.

In a paper published last month in the *SIAM Journal on Applied Mathematics*, authors Sean McGinty, Sean McKee, Roger Wadsworth, and Christopher McCormick devise a mathematical model to improve currently-employed treatments of coronary heart disease (CHD).

“CHD remains the leading global cause of death, and mathematical modeling has a crucial role to play in the development of practical and effective treatments for this disease,” says lead author Sean McGinty. “The use of mathematics allows often highly complex biological processes and treatment responses to be simplified and written in terms of equations which describe the key parameters of the system. The solution of these equations invariably provides invaluable insight and understanding that will be crucial to the development of better treatments for patients in the future.”

The accumulation of plaque during CHD can result in chest pain and, ultimately, rupture of the atherosclerotic plaque, which causes blood clots that block the artery and lead to heart attacks. A common method of treatment involves inserting a small metallic cage called a stent into the occluded artery to maintain blood flow.

However, upon insertion of a stent, the endothelium—the thin layer of cells that lines the inner surface of the artery—can be severely damaged. The inflammatory response triggered as a result of this damage leads to excessive proliferation and migration of smooth muscle cells (cells in the arterial wall that are involved in physiology and pathology) leading to re-blocking of the artery. This is an important limitation in the use of stents. One way to combat this has been the use of stents that release drugs to inhibit the smooth muscle cell proliferation, which causes the occlusion. However, these drug-eluting stents have been associated with incomplete healing of the artery. Studies are now being conducted to improve their performance.

“Historically, stent manufacturers have predominantly used empirical methods to design their drug-eluting stents. Those stents which show promising results in laboratory and clinical trials are retained and those that do not are discarded,” explains McGinty. “However, a natural question to ask is, what is the optimal design of a drug-eluting stent?”

The design of drug-eluting stents is severely limited by lack of understanding of the factors governing their drug release and distribution. “How much drug should be coated on the stent? What type of drug should be used?” McGinty questions. “All of these issues, of course, are inter-related. By developing models of drug release and the subsequent uptake into arterial tissue for current drug-eluting stents, and comparing the model solution with experimental results, we can begin to answer these questions.”

The model proposed by the authors considers a stent coated with a thin layer of drug-containing polymer, embedded in the arterial wall, together with a porous region of smooth muscle cells embedded in an extracellular matrix.

When the polymer region and the tissue region are considered as a coupled system, it can be shown under certain conditions that the drug release concentration satisfies a special kind of integral equation called the Volterra integral equation, which can be solved numerically. The drug concentration in the system is determined from the solution of this integral equation. This gives the mass of drug within cells, which is of primary interest to clinicians.
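To illustrate the kind of numerical solution involved, here is a generic trapezoidal-rule solver for a Volterra equation of the second kind, $u(t) = g(t) + \int_0^t K(t,s)\,u(s)\,ds$. The kernel and forcing below form a textbook test case with exact solution $e^t$; they are not the ones derived in the paper.

```python
import numpy as np

# Generic trapezoidal solver for a Volterra integral equation of the
# second kind: u(t) = g(t) + integral_0^t K(t, s) u(s) ds.
# The test kernel K = 1 and forcing g = 1 below give u' = u, u(0) = 1,
# i.e. u(t) = e^t; they are illustrative, not the stent model's.
def solve_volterra(g, K, t):
    n = len(t)
    h = t[1] - t[0]          # assumes a uniform grid
    u = np.empty(n)
    u[0] = g(t[0])
    for i in range(1, n):
        # trapezoidal weights: h/2 at the endpoints, h in between
        s = h * sum(K(t[i], t[j]) * u[j] for j in range(1, i))
        s += 0.5 * h * K(t[i], t[0]) * u[0]
        # solve u[i] = g(t_i) + s + (h/2) K(t_i, t_i) u[i] for u[i]
        u[i] = (g(t[i]) + s) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return u

t = np.linspace(0.0, 1.0, 101)
u = solve_volterra(lambda t: 1.0, lambda t, s: 1.0, t)
print(abs(u[-1] - np.e))     # small discretization error
```

Because the unknown appears both outside and inside the integral, each step solves a small implicit equation; the trapezoidal rule keeps the scheme second-order accurate.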

The simple one-dimensional model proposed in the paper provides analytical solutions to this complex problem. “While the simplified one and two-dimensional models that our group and others have recently developed have provided qualitative results and useful insights into this problem, ultimately three-dimensional models which capture the full complex geometry of the stent and the arterial wall may be required,” McGinty says.

In a complex environment with pulsating blood flow, wound healing, cell proliferation and migration, and drug uptake and binding, the process of drug release from the stent may involve a multitude of factors, which could be best understood by three-dimensional models. “This is especially relevant when we want to consider the drug distribution in diseased arteries and when assessing the performance of the latest stents within complex geometries, where for instance, the diseased artery may bifurcate,” says McGinty. “We are therefore currently investigating the potential benefits of moving to three-dimensional models.”

*Source article*:

Sean McGinty, Sean McKee, Roger M. Wadsworth, and Christopher McCormick,

*Modeling Arterial Wall Drug Concentrations Following the Insertion of a Drug-Eluting Stent*,

SIAM Journal on Applied Mathematics, 73(6), 2004–2028. (Online publish date: November 12, 2013).

The source article will be available for free access at the link above until March 9, 2014.

A continuation of the workshop “Few-Body Dynamics in Atoms, Molecules, and Planetary Systems,” held at the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany, June 28-July 1, 2010, CEMAD-2013 aimed to bring together experts in celestial mechanics and semi-classical theory, as applied to the study of atoms and molecules, for the benefit of all those involved. The main emphasis was on the mathematical aspects of these research fields. The event was a satellite meeting of the Mathematical Congress of the Americas (MCA-2013) and part of *Mathematics of the Planet Earth* (MPE-2013).

One-hour invited lectures were given by (in alphabetical order):

- Paula Balseiro, UFF – Rio de Janeiro, Brazil
- Luis Benet, Mexico City, Mexico
- Stefanella Boatto, UFRJ – Rio de Janeiro, Brazil
- Alessandra Celletti, Rome, Italy
- Nark Nyul Choi, Kumih, South Korea
- Holger Dullin, Sydney, Australia
- Andreas Knauf, Erlangen, Germany
- Jacques Laskar, Paris, France
- Javier Madronero, Munich, Germany
- Jesús Palacian, Pamplona, Spain
- Thomas Pfeifer, Heidelberg, Germany
- Gareth Roberts, Worcester, USA
- Manuele Santoprete, Waterloo, Canada
- Cristina Stoica, Waterloo, Canada
- Susanna Terracini, Torino, Italy
- Turgay Uzer, Atlanta, USA
- Patricia Yanguas, Pamplona, Spain
- Shiqing Zhang, Chengdu, China

Several contributed talks were given during the five days of the meeting. The pleasant atmosphere led to many interesting discussions. The most notable ones were about:

- connecting the Wannier ridge, which is the single central configuration in helium, with the latest developments in the theory of central configurations;
- finding the connection between atoms with more than three electrons and various central configurations that occur for the Coulomb potential;
- generalizing the eZe configurations of the isosceles problem;
- using recent achievements in KAM theory to bring into evidence some new periodic orbits in atomic and molecular systems;
- using the formalism of geometric mechanics to explore the symplectic structure of the equations that describe molecular dynamics;
- exploring the use of symbolic dynamics and borrowing from each other’s experience, for the benefit of all groups involved;
- using new ways to apply McGehee coordinates in the study of motion near total collisions;
- finding and investigating new symmetries, both in flat and curved space; and
- exploring the latest quantum experiments from various mathematical points of view.

The participants deemed the meeting a great success and many of them reinforced the idea that workshops on this topic should continue in the future.

Florin Diacu, University of Victoria
