Feynman diagrams revolutionized particle physics. Now mathematicians want to do the same for vector calculus.

Back in 1949, the journal Physical Review published a paper entitled “Space-Time Approach to Quantum Electrodynamics” by a young physicist named R.P. Feynman at Cornell University. The paper described a new way to solve problems in electrodynamics. However, it is remembered today for a much more powerful invention — the Feynman diagram, which appeared there in print for the first time. Feynman diagrams have had a huge impact in physics. They are pictorial representations of the mathematics that describe the interactions between subatomic particles. Mathematically, each interaction corresponds to an infinite series of terms, so even simple interactions between particles are fantastically complex to write down.

Feynman’s genius was to represent these series with simple lines in a graphical format, allowing scientists to think about particle physics in new and exciting ways.

Feynman and others immediately began to extend their ideas using this graphical shorthand. Indeed, the American physicist Frank Wilczek, who worked with Feynman in the 1980s, once wrote: “The calculations that eventually got me a Nobel Prize in 2004 would have been literally unthinkable without Feynman diagrams.”

Of course, many other areas of physics rely on complex mathematics. And that raises the interesting question of whether graphics-based innovations could simplify these calculations and perhaps kick-start a new era of innovation, just as Feynman did.

Enter Joon-Hwi Kim at Seoul National University in South Korea and a couple of colleagues who have come up with a similar innovation for vector calculus — a graphics-based shorthand for one of the most common and powerful mathematical tools in science. “We anticipate that graphical vector calculus will lower the barriers in learning and practicing vector calculus, as Feynman diagrams did in quantum field theory,” they say.

First some background. Vector calculus is the branch of mathematics that deals with the differentiation and integration of vector fields. The reason it is so important in physics is that more or less everything in the universe can be described in terms of vector fields — electromagnetic fields, gravitational fields, fluid flow, and so on.

That’s why every physics and engineering undergraduate spends many happy hours struggling with the mathematics and the arcane notation that it requires. The problem is that vector fields are intricate entities — they assign a single vector to every point in three-dimensional space and can themselves be representations of more complex mathematical objects called differentiable manifolds. So at its very simplest, a vector field can be an infinite list of vectors.

Mathematicians represent these fields using an approach called index notation. A vector can be written as *aᵢ*, where *i* = 1, 2, or 3 in three-dimensional space. Another way to write this is **a** = [*a₁*, *a₂*, *a₃*].

The problems arise when these quantities interact mathematically. Vector fields can be multiplied by scalars or by each other in two different ways, known as a dot product and a cross product. And the results can be fantastically complex — huge, multidimensional matrices.
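To make the two products concrete, here is a minimal sketch in plain Python (the function names are illustrative, not from the paper). Indices run over 0, 1, 2 rather than the 1, 2, 3 of the text's index notation.

```python
def dot(a, b):
    # Dot product: the shared index is summed over, leaving no free
    # index -- the result is a scalar.
    return sum(a[i] * b[i] for i in range(3))

def cross(a, b):
    # Cross product: one free index remains -- the result is a vector.
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

x, y = [1, 0, 0], [0, 1, 0]
print(dot(x, y))    # 0, since the vectors are perpendicular
print(cross(x, y))  # [0, 0, 1], the third basis vector
```

The "no free index means scalar, one free index means vector" bookkeeping is exactly what Kim and co's lines make visible at a glance.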

In all these cases, the indices of the vector fields involved must be tracked carefully. Any physicist will know how easy it is to lose an index, and the pain involved in finding it again.

Then there is the challenge of working out how these fields change over time, or in relation to some other variable. This is the problem of differentiation, for which physicists have developed a range of tools known as operators — perhaps the most famous being the __del operator__.
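The del operator applied to a scalar field gives its gradient. As a rough numerical sketch (the field *f*(*x*, *y*, *z*) = *xyz* is an illustrative choice, not from the paper), the gradient can be approximated with central differences:

```python
def grad(f, p, h=1e-5):
    """Approximate (df/dx, df/dy, df/dz) at point p by central differences."""
    g = []
    for i in range(3):
        up = list(p); up[i] += h   # step forward along axis i
        dn = list(p); dn[i] -= h   # step backward along axis i
        g.append((f(*up) - f(*dn)) / (2 * h))
    return g

f = lambda x, y, z: x * y * z      # gradient should be (yz, xz, xy)
print(grad(f, [1.0, 2.0, 3.0]))    # approximately [6.0, 3.0, 2.0]
```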

The advance that Kim and colleagues have made is to develop a graphics-based notation that replaces the index notation. They represent a vector as a box with a line attached to it. By contrast, a scalar has no lines extending from it.

When two vectors multiply together via a dot product, the result is a scalar quantity. Kim and co’s notation takes care of this automatically. In a dot product, the lines associated with the two vectors connect to each other, creating an object with no external lines — in other words, a scalar.

But a cross product between two vectors produces another vector, and again Kim and co’s notation handles this automatically. The graphic for a cross product is y-shaped, with the lines from the two vectors connecting to a third that extends away. In other words, this forms a vector.

This is just the beginning. The researchers go on to describe a wide range of other mathematical tools, such as the del operator along with various important identities used in vector calculus. And they extend their ideas to tensors, which are more complex mathematical objects, each with two or more indices.
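The identities the notation simplifies can also be sanity-checked numerically. As a plain-Python illustration (this is a numerical spot check, not the authors' graphical method), here is one classic identity, **a** × (**b** × **c**) = **b**(**a**·**c**) − **c**(**a**·**b**), verified for sample vectors:

```python
def dot(a, b):
    # Scalar result: the shared index is summed away.
    return sum(a[i] * b[i] for i in range(3))

def cross(a, b):
    # Vector result: one free index remains.
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 9]
lhs = cross(a, cross(b, c))
rhs = [b[i] * dot(a, c) - c[i] * dot(a, b) for i in range(3)]
print(lhs == rhs)  # True
```

Proving such identities by hand means shuffling indices through the epsilon and delta symbols; in the graphical notation they reduce to rewiring lines.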

The results show a remarkable economy. Kim and co show how their notation turns complex mathematical expressions into relatively simple graphics, just like Feynman diagrams. “The language is highly intuitive and automatically simplifies tensorial expressions,” they say.

There is significant utility here. Kim and co say their approach turns vector calculus into a visual task, rather like building with Lego bricks. “As a child playing with educational toys such as Lego blocks or magnetic building sticks, it will be an entertaining experience to ‘doodle with the dancing diagrams,’” they say. “As Feynman diagrams are the most natural language to describe the microscopic process of elementary particles, the graphical notation is the canonical language of the vector calculus system.”

That’s a grand claim with huge potential. There is no question that Feynman diagrams have changed the way physicists think about particle physics. But vector calculus has an even greater reach as the mathematical foundation of much of modern physics and engineering.

**Source**: MIT Technology Review