Balachandran Physics


Mark Tracy

Aug 3, 2024, 5:53:05 PM
to difounlige

Aiyalam Parameswaran Balachandran (born 25 January 1938) is an Indian theoretical physicist known for his extensive contributions to the role of classical topology in quantum physics. He is currently an emeritus professor in the Department of Physics, Syracuse University,[1] where he was previously the Joel Dorman Steele Professor of Physics between 1999 and 2012.[2][3] He has also been a fellow of the American Physical Society since 1988 and was awarded a prize by the U.S. Chapter of the Indian Physics Association in recognition of his outstanding scientific contributions.[4]

Balachandran was born on 25 January 1938 in Salem, Tamil Nadu, India. His father, Aiyalam Sundaram Parameswaran, was a chartered accountant with Pierce Leslie and Company in Cochin. Among Balachandran's school teachers was the noted poet Vyloppilli Sreedhara Menon. Balachandran completed his first two college years at Guruvayurappan College, Kozhikode, specialising in physics, chemistry and mathematics and passing the 'Intermediate Examination' with all-State distinction in 1953. He then joined the BSc (Hons) programme in Physics at Madras Christian College, Tambaram, Chennai, graduating from MCC in 1958.

Balachandran received his PhD degree under Professor Alladi Ramakrishnan at the University of Madras.[5] He then joined the Institute for Theoretical Physics at the University of Vienna as a postdoctoral fellow under Professor Walter Thirring, and subsequently held a postdoctoral position at the Enrico Fermi Institute. In 1964, he joined the Syracuse University faculty. Balachandran's key scientific contributions to date include reviving the Skyrme model, which successfully describes baryons as topological solitons of meson fields, and applying mathematical concepts such as homotopy groups and fibre bundles to problems in quantum physics. More recently, Balachandran's research has focused on the formulation of quantum field theories on noncommutative spacetimes and on the emerging significance of Hopf algebras in quantum physics as generalisations of symmetry groups.

Interns, clockwise from left: Miles Kim, a SULI student; Erron Williams, an undergraduate engineering intern; and Lasya Balachandran, a high school intern. (Photos by Jeanne Jackson DeVoe and courtesy of Erron Williams and Lasya Balachandran/Collage by Kiran Sundarsanan)

All the students attended the fusion and plasma workshop June 14 to 25 that kicked off the Science Undergraduate Laboratory Internships (SULI) before embarking on research projects that will be presented online at the American Physical Society Division of Plasma Physics Conference Nov. 8 to 12.

Lasya Balachandran, an incoming freshman at the Massachusetts Institute of Technology who is majoring in electrical engineering and computer science, said the high school internship at PPPL gave her an opportunity to learn more about the field of plasma physics.

Balachandran developed an interest in math and science early on. She attended coding and robotics camps in middle school and went to High Technology High School in Lincroft, New Jersey, an engineering-focused high school, where she conducted research and participated in STEM clubs and competitions.

Balachandran worked from home in Marlboro, New Jersey, under the guidance of PPPL physicist George Wilkie, on simulating the reflection of plasma in doughnut-shaped tokamaks, a type of fusion energy device. In her research, Balachandran simulated using the liquid metal lithium to line the inner walls of the tokamak and studied how the plasma interacted with the lithium. She used different settings to determine the probability that particles of deuterium, an isotope of hydrogen, would reflect off the lithium, and examined the average energy of the reflected particles.
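A study of this kind can be thought of as a Monte Carlo estimate over many simulated particle impacts. The sketch below is a purely illustrative toy, not the actual PPPL code: the fixed reflection probability and the uniform retained-energy model are assumptions chosen only to show the shape of the calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_reflection(n_particles=100_000, incident_energy_ev=100.0):
    """Toy Monte Carlo estimate of reflection probability and mean
    reflected energy for particles striking a wall.

    Assumes (for illustration only) a fixed reflection probability and
    that each reflected particle retains a uniform random fraction of
    its incident energy -- not a real lithium surface-physics model.
    """
    p_reflect = 0.3                              # assumed probability
    reflected = rng.random(n_particles) < p_reflect
    retained = rng.random(n_particles) * incident_energy_ev
    reflected_energies = retained[reflected]
    prob = reflected.mean()
    mean_energy = reflected_energies.mean()
    return prob, mean_energy

prob, mean_e = simulate_reflection()
print(f"reflection probability ~ {prob:.3f}")
print(f"mean reflected energy ~ {mean_e:.1f} eV")
```

With 100,000 sampled particles, the estimated probability converges close to the assumed value, illustrating how statistics over many simulated impacts yield the quantities described above.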

Researchers are trying to determine the best material for use in a fusion reactor, Wilkie said. He noted that liquid lithium is one of the options that researchers are considering for plasma-facing components.

Kim spent the summer at his home in Collegedale, Tennessee, near Chattanooga, working on a research project with physicist Sam Cohen on the Princeton Field Reversed Configuration (PFRC) fusion-reactor concept, whose defining feature is the use of a novel radio-frequency method to drive plasma current and heat the plasma.

Williams spent the summer working from his home in Houston on a computer code that could help predict the ideal configuration for a fusion device using high temperature superconducting magnets. The code was aimed at predicting the stress on individual coils through computer modeling for potential use in a spherical tokamak power plant.


One of the main challenges in materials discovery is efficiently exploring the vast search space for targeted properties as approaches that rely on trial-and-error are impractical. We review how methods from the information sciences enable us to accelerate the search and discovery of new materials. In particular, active learning allows us to effectively navigate the search space iteratively to identify promising candidates for guiding experiments and computations. The approach relies on the use of uncertainties and making predictions from a surrogate model together with a utility function that prioritizes the decision making process on unexplored data. We discuss several utility functions and demonstrate their use in materials science applications, impacting both experimental and computational research. We summarize by indicating generalizations to multiple properties and multifidelity data, and identify challenges, future directions and opportunities in the emerging field of materials informatics.

The trajectory of the discovery of new materials, with increasing complexity as a function of time, as the process accelerates from trial and error to high-throughput calculations and statistical design methods that improve our ability to learn from existing data and decide which materials to test next

The materials challenge in its full generality encompasses a very high-dimensional discovery or search space with millions of possible compounds, of which only a very small fraction have been experimentally explored. The space spans chemistry, crystal structure, processing conditions and microstructure; the compounds can be multicomponent (for example, solid solutions), and the properties can depend on materials descriptors or features at several length scales. Most efforts using first-principles codes have largely focused on establishing a library of crystal structures and chemistries relevant to the problem, defining the training space in terms of samples and features — which can be elemental properties (e.g., electronegativity, Mendeleev number), bond angles, bond lengths, energetics from first-principles calculations, and aspects of thermodynamics from experiments or codes such as Calphad — in order to down-select promising candidates for experiments or further studies.15 A number of studies have also used inference models, such as off-the-shelf regression and classification learning tools, which can include deep neural networks for large datasets of microstructures and high-throughput computational databases, to make predictions. The approach has been employed to suggest new Heusler alloys,16 polymers,17 and thermoelectrics,18 to name a few. In the case of Heusler alloys, the predicted compounds have also been synthesized. In spite of a significant amount of work using this approach, few examples exist where the final properties exceed those of the best compounds in the training data sets. There are now a number of articles and reviews19,20,21,22,23,24,25,26 emphasizing the merits of using machine learning and statistical inference to make predictions in a large combinatorial space.
However, few focus on how to make the optimal next decisions for synthesis and characterization by experiments or calculations.27,28,29,30,31,32,33,34,35,36 The predictions from machine learning are not necessarily optimal. Aspects related to multiscale modeling and constitutive response at the engineering design scale are discussed in the book by McDowell et al.37
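The down-selection workflow described above — fit an off-the-shelf regression model on known (feature, property) pairs, then rank unexplored candidates by the predicted property — can be sketched in a few lines. Everything below is synthetic: the features, the training data, and the "true" property are illustrative assumptions, not real materials data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical features per compound (e.g., electronegativity difference
# and average atomic radius); values are synthetic for illustration.
n_train, n_candidates, n_features = 50, 500, 2
X_train = rng.random((n_train, n_features))
X_candidates = rng.random((n_candidates, n_features))

def true_property(X):
    # Synthetic ground truth standing in for a measured property.
    return 2.0 * X[:, 0] - 1.0 * X[:, 1]

# Noisy "measurements" on the training compounds.
y_train = true_property(X_train) + 0.05 * rng.standard_normal(n_train)

# Fit a linear surrogate by least squares (with an intercept column).
A = np.column_stack([X_train, np.ones(n_train)])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Predict on the unexplored candidates and down-select the top ten.
A_cand = np.column_stack([X_candidates, np.ones(n_candidates)])
y_pred = A_cand @ coef
top10 = np.argsort(y_pred)[::-1][:10]
print("indices of top-ranked candidates:", top10)
```

Note that this gives point predictions only: as the text emphasizes, ranking by predicted value alone ignores model uncertainty, which is exactly what the active-learning approach discussed next addresses.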

In contrast, the approaches we will discuss are based on an active learning or adaptive design paradigm, whereby the predictions from a surrogate model — which can be an inference model, a physics-based model, or a reduced order model (ROM) — serve as the input to a utility or acquisition function, the optimum of which dictates the next experiment or calculation to be performed.38,39 This is the optimal experimental design component, the key decision-making aspect of the active learning loop of Fig. 2. The results of the experiment or computation then augment the training data, and the loop continues until the material requirements for the desired targets are met. The approach is distinctive in that most of the work done in the field essentially involves only one or two of these steps: taking data, building a model, making predictions, and validating them with calculations or experiments. We are not aware of studies in which new materials have been found even via the inner feedback loop (green) of Fig. 2, let alone by incorporating uncertainties and performing experimental design. However, the Bayesian and decision-theoretic approach discussed in this review naturally lends itself to adaptive sampling and active learning.40 The input to the decision making can come from predictions from any inference, surrogate or machine-learned model. One first defines the utility of an experiment or calculation and then, taking into account uncertainties in both the parameter values (if any) and the observations or objectives, chooses experiments or calculations by maximizing an expected utility. The utilities are defined according to information-theoretic considerations given the desired goals. The approach can also be used for model selection, that is, the design of experiments using maximum information criteria to distinguish between models.
For any experimental design procedure, we should have a notion of the value of the information gained (or cost of uncertainty reduced) by observing a specific data point. The possible alternatives to observe (experiments) can then be ranked by the expected value of information they provide, allowing experiments to be prioritized accordingly.
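The loop described above — surrogate prediction with uncertainties, a utility function, a decision step, and data augmentation — can be sketched end to end. This is a minimal, self-contained NumPy illustration using a tiny Gaussian-process surrogate and the expected-improvement utility; the objective function, kernel length scale, and candidate grid are all assumptions made for the example, not taken from the review.

```python
import numpy as np
from math import erf

erf_vec = np.vectorize(erf)

def objective(x):
    # Stand-in for a costly experiment or calculation (maximum at x = 0.6).
    return -(x - 0.6) ** 2

def rbf_kernel(a, b, length=0.1):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean and std of a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    mu = Ks.T @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, Ks)
    var = np.clip(1.0 - np.sum(Ks * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """Utility: expected gain over the current best observation."""
    z = (mu - best) / sigma
    pdf = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    cdf = 0.5 * (1 + erf_vec(z / np.sqrt(2)))
    return (mu - best) * cdf + sigma * pdf

# Candidate "materials" are points on a 1-D grid; start from two samples.
candidates = np.linspace(0.0, 1.0, 201)
x_train = np.array([0.0, 1.0])
y_train = objective(x_train)

for _ in range(15):
    mu, sigma = gp_posterior(x_train, y_train, candidates)
    ei = expected_improvement(mu, sigma, y_train.max())
    x_next = candidates[np.argmax(ei)]     # decision: maximize the utility
    y_next = objective(x_next)             # "run" the experiment
    x_train = np.append(x_train, x_next)   # augment the training data
    y_train = np.append(y_train, y_next)

best_x = x_train[np.argmax(y_train)]
print(f"best candidate found: x = {best_x:.3f}")
```

The expected-improvement utility balances exploitation (high predicted mean) against exploration (high predicted uncertainty), which is why the loop samples unexplored regions before homing in on the optimum; swapping in a different utility function changes only the `expected_improvement` step.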
