
Paul McLellan
What Is a Lagrange Point in Space? And DALL·E 2

15 Apr 2022 • 6 minute read

Monday is another Cadence recharge day, and Breakfast Bytes will not appear. It is almost like being back in England again, where Easter Monday is always a public holiday. On the day before a break, I sometimes go way off topic to things like recipes or optical illusions. Today is only semi-off-topic: a look at Lagrange points in space, and a look at the recently announced, incredible DALL·E 2 system, which takes a description of an image you want and produces it. Make sure to take a look; it is extraordinary.

Lagrange Points

You might have read that the James Webb Space Telescope is positioned at Lagrange point L2. So, what is a Lagrange point?

There is a problem in dynamics called the n-body problem, and an important special case is the three-body problem. The two-body problem, say the Earth and the Moon, is analytically solvable. But the three-body problem is chaotic, so in the general case nothing can be predicted. You can use numerical methods but, in the worst case, you get the "butterfly effect": no matter how accurate your computation, it is not accurate enough.

This chaotic effect was first noticed (and named) by Edward Lorenz. As reported in APS Physics:

One day in the winter of 1961, Lorenz wanted to examine one particular sequence at greater length, but he took a shortcut. Instead of starting the whole run over, he started midway through, typing the numbers straight from the earlier printout to give the machine its initial conditions. Then he walked down the hall for a cup of coffee, and when he returned an hour later, he found an unexpected result. Instead of exactly duplicating the earlier run, the new printout showed the virtual weather diverging so rapidly from the previous pattern that, within just a few virtual "months", all resemblance between the two had disappeared.
...
Lorenz subsequently dubbed his discovery "the butterfly effect": the nonlinear equations that govern the weather have such an incredible sensitivity to initial conditions, that a butterfly flapping its wings in Brazil could set off a tornado in Texas. And he concluded that long-range weather forecasting was doomed.

A special case of the three-body problem is where two of the bodies are much more massive than the third, such as the Earth, the Moon, and the Apollo 11 spacecraft, or the Earth, the Sun, and the James Webb Telescope. In this case, there are five equilibrium points, known as Lagrange points, where the smaller mass can park. These points are rarely truly stable, like a marble in the bottom of a bowl. Usually, they are unstable, more like a pencil balanced on its point. In theory, it should stay there forever, but we all know that it won't. But it also takes very little force to keep the pencil upright, as long as you don't let it get too far off vertical.

Lagrange points are like that. If the James Webb Telescope were at some random point in space, it would take a lot of fuel to keep it there, with its thrusters running almost continuously. But at a Lagrange point, it takes very little. You can think of the Lagrange points as being "almost stable".

NASA has a good explanation What Is a Lagrange Point? with this diagram showing where they are.

L1 is between the Earth and the Sun. L2, where the James Webb Telescope is parked, is on the other side of the Earth from the Sun. When you first look at this diagram, it seems obvious that L1 would be reasonably stable, but L2? You have to remember that this is not a static diagram: the Earth is going around the Sun (once per sidereal year), so not just the Earth but all the Lagrange points are rotating about the Sun too. (In this model, the Sun is considered fixed, although if you zoom out, the Sun is actually moving through the Milky Way, and the Milky Way through the universe.) L2 is an equilibrium because it sits just where the gravitational attraction of the Sun plus the Earth matches the centrifugal effect on the telescope. In fact, that is the case for all five points: the gravitational pull of the Sun and the Earth balances the centrifugal effect on the object.
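That balance condition can be checked numerically. The sketch below is illustrative only: it uses rounded standard constants and ignores the Moon and the eccentricity of Earth's orbit. It solves for the distance beyond the Earth at which solar plus terrestrial gravity together supply exactly the centripetal acceleration needed to orbit the Sun once per year:

```python
# Locate the Sun-Earth L2 point by bisection: find the distance d beyond
# Earth where gravity (Sun + Earth) equals the centripetal acceleration
# for a circular orbit of radius R + d at Earth's angular velocity.

GM_SUN   = 1.327e20        # gravitational parameter of the Sun, m^3/s^2
GM_EARTH = 3.986e14        # gravitational parameter of the Earth, m^3/s^2
R        = 1.496e11        # Sun-Earth distance, m
OMEGA2   = GM_SUN / R**3   # square of Earth's orbital angular velocity

def imbalance(d):
    """Net inward acceleration at distance d beyond Earth (zero at L2)."""
    gravity = GM_SUN / (R + d)**2 + GM_EARTH / d**2
    centripetal = OMEGA2 * (R + d)
    return gravity - centripetal

# Close to Earth its gravity dominates (positive); far away the
# centrifugal term dominates (negative), so bisect between the two.
lo, hi = 1e8, 1e10
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if imbalance(mid) > 0:
        lo = mid
    else:
        hi = mid

print(f"L2 is about {lo / 1e9:.2f} million km beyond Earth")
```

The answer comes out at about 1.5 million kilometers, which matches the published distance to L2 — roughly four times as far away as the Moon.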

The first three Lagrange points were discovered by Leonhard Euler in the 1760s; Lagrange discovered L4 and L5 in 1772. For some ratios of the two large masses, L4 and L5 are truly stable (like the marble in a bowl), and if an object is displaced, it gets pulled back. As a result, in some such cases, like the Sun and Jupiter, there is cosmic dust and even small asteroids trapped there.

Here is the first sharp photo taken from the James Webb Telescope (mid-March). The star is 2,000 light-years away, so you are seeing what it looked like when Julius Caesar was in power in Rome. This star has the catchy name 2MASS J17554042+6551277!

DALL·E 2

A complete change of subject. Did you see the announcement of OpenAI's DALL·E 2? It takes a description in words of an image you want and produces such an image. For example, "Teddy bears mixing sparkling chemical as mad scientists as a 1990s Saturday morning cartoon". The result is entirely generated by the AI program with no human intervention.

If you go to the DALL·E 2 website, it is interactive. So, for instance, in the above example, you can click on "An astronaut" and the teddy bears get switched out for an astronaut. Go and play with it; it is completely amazing. I'm sure it is not actually running the model when you click on the choices (otherwise it would give you free rein to type anything you wanted), so it would be interesting to know how long the examples took to create. But Sam Altman, the CEO, was on Twitter letting people propose images. Then, when three or four people got onto the same thread, he would put the prompt into the program.

The system doesn't create the pictures the way you or I would. It starts with random pixels, then looks at them to see how well they match the requested attributes, and decides what to change based on the differences. It is not unlike the way that neural networks are trained to, say, recognize road signs.
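As a toy illustration of that refine-from-noise idea — this is emphatically not DALL·E 2's actual architecture, and the "target" below is a made-up stand-in for whatever the text prompt demands — the following sketch starts from random pixels and repeatedly nudges them in whichever direction improves a score:

```python
import random

# Toy refine-from-noise loop. A real diffusion model learns a denoising
# network conditioned on the prompt; here the "score" is simply closeness
# to a hypothetical target pattern, so the idea fits in a few lines.

random.seed(0)
target = [0.2, 0.8, 0.5, 0.9]               # stand-in for "what the prompt wants"
pixels = [random.random() for _ in target]  # start from pure noise

def score(img):
    """How well the image matches the target (0 means a perfect match)."""
    return -sum((p - t) ** 2 for p, t in zip(img, target))

before = score(pixels)
for _ in range(200):
    # step each pixel in the direction that raises the score
    # (the gradient of score with respect to p is 2 * (t - p))
    pixels = [p + 0.05 * 2 * (t - p) for p, t in zip(pixels, target)]
after = score(pixels)

print(before, "->", after)  # the score climbs to essentially zero
```

The interesting part of a real system is, of course, that the score is not a fixed target image but a learned judgment of how well the pixels match an arbitrary text description.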

I wonder how much success designing images like this is because we have such a huge amount of annotated image data (see my post ImageNet: The Benchmark that Changed Everything). You can find pictures of Teddy bears or astronauts easily. Trying to automatically "find an antibiotic that is effective against xxx" seems a lot harder, not just because the task seems intrinsically harder but also there is a lot less data out there to start from.

One thing that this makes you think about is captured by Sam Altman in a blog post:

 It’s a reminder that predictions about AI are very difficult to make. A decade ago, the conventional wisdom was that AI would first impact physical labor, and then cognitive labor, and then maybe someday it could do creative work. It now looks like it’s going to go in the opposite order.

I hope OpenAI doesn't target blog posts next or I'm out of a job!

Let's wrap up with "a painting of a fox sitting in a field at sunrise in the style of Claude Monet".

Breakfast Bytes will be back on Tuesday with a product announcement.

Bonus Video

A San Francisco cop pulls over a driverless Cruise car because its lights were off.


Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
