Navier Stokes in Chapel - 2D Solutions & Performance

It took me a long while to figure out what a source condition was. Where did you get that wording? Is it a pressure term? I found myself cross-referencing the Python tutorial a lot, and even that is a bit lacking in explanation. Is that intentional? It made the articles hard to read.

How do you choose nx and ny given that you want a mesh of 30*30 points?

Also, y = 0 and y = 2? Are these the left and right edge of the 2D region in question?

Thanks - Damian

Overall, an impressive set of four articles.

Hi Damian,

Are you possibly referring to this blog page in the tutorial series?

For our purposes, Poisson’s Equation can be written as follows, where "p" is a scalar quantity representing a fluid’s pressure as a function of space, and "b" is a scalar source term, which in the incompressible Navier-Stokes context can be derived from the fluid’s momentum:

$ \nabla^2 p = b $

...

Source and Boundary Conditions

From Step 10 of the tutorial, we’ll use the following value for "b" that should create a peak and valley in our solution around the two specified points:
...

Though I have no experience with the Navier-Stokes equations, I guess the word "source" above refers to the right-hand side ("b") of the Poisson-like equation ($ \nabla^2 p = b $), and "source condition" probably means the setting/definition of the b term. I suspect the article is trying to keep the tutorial as minimal and simple as possible (by not including too much detail about realistic models), not trying to make it harder to read :wink: (That said, if some sentences or example codes are unclear in places, it may be useful to update those parts.)
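If my reading is right, a tiny NumPy sketch of that Step-10 setup might look like the following. (The spike positions and magnitudes are my guess at the tutorial's values, not taken from the blog, and the Jacobi update is the standard 5-point Poisson stencil.)

```python
import numpy as np

nx, ny = 30, 30

# Source term b for the Poisson equation  del^2 p = b :
# zero everywhere except two point "sources" of opposite sign,
# which should produce the peak and valley the blog mentions.
b = np.zeros((ny, nx))
b[ny // 4, nx // 4] = 100.0
b[3 * ny // 4, 3 * nx // 4] = -100.0

# A few Jacobi sweeps of the discretized Poisson equation,
# with p = 0 held on the boundary (the edges are never written).
dx = dy = 2.0 / (nx - 1)
p = np.zeros_like(b)
for _ in range(200):
    pn = p.copy()
    p[1:-1, 1:-1] = ((pn[1:-1, 2:] + pn[1:-1, :-2]) * dy**2
                     + (pn[2:, 1:-1] + pn[:-2, 1:-1]) * dx**2
                     - b[1:-1, 1:-1] * dx**2 * dy**2) / (2 * (dx**2 + dy**2))

print(f"p range: [{p.min():.4f}, {p.max():.4f}]")  # one valley, one peak
```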

By the way, I haven't run the tutorial codes yet and would like to try them over the winter holidays :santa: It also seems interesting to compare with C++ or Fortran, etc.

(PS: I tried to type a math equation with $...$, but it does not seem to support Tex (or MathJax) at the moment...)

Best regards and have a nice weekend, -- Takeshi

Thanks for the reply. I did mean that blog, which was mentioned in the November newsletter. I went back to a really basic mathematics text, and the word "source" comes from there. I missed the words about the term being related to momentum. I found the blog's structure frustrating, as I was going back and forth between the blog articles and the original Python tutorials. That is a recipe for missing things.

BTW, there is some issue with the latest Discourse blocking mathjax.org by default. There is a suggestion to add https://cdn.mathjax.org to the cors origins Site Setting. No idea what that is about; it needs a Discourse guru.

Thanks again. Damian

If one assigns some StencilDist array, say x, to another, y

y = x

Does that copy the fluff?

In the computations in nsStep11.chpl,

```chapel
for 1..nit {
  p <=> pn;
  poissonKernel(p, pn, b);
  neumannBC(p);
  p.updateFluff();
}
```

Why is p.updateFluff() called where it is, rather than pn.updateFluff() being called just prior to the call to poissonKernel(), as it was in nsPoissonStencil.chpl? Just curious. And yes, I know you need to do a p.updateFluff() after that loop, just prior to computeU().

In nsPoissonStencil.chpl, and even nsPoisson.chpl, the convergence criterion looks odd:

 deltaL1Norm = (+ reduce (abs(p) - abs(pn))) / (+ reduce abs(pn));

The blog says that "We can either iterate until the difference in p from step to step is very small". That difference is p - pn. Using a convergence criterion based on 1-norms, the value of the variable deltaL1Norm would be

 deltaL1Norm = (+ reduce (abs(p - pn))) / (+ reduce abs(pn));

This is the Cauchy convergence criterion for a sequence of vectors.

The difference of the individual 1-norms, as used in Barba's original Python and replicated in the Chapel code, is, I would suggest, not a true convergence measure, and it should be avoided. My 2c. It is certainly not Cauchy's criterion.
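To make the difference concrete, here is a small hypothetical numerical sketch: two successive iterates whose 1-norms are identical, so the difference-of-norms test reports convergence, while the Cauchy-style norm-of-difference shows the iteration is still moving.

```python
import numpy as np

# Two successive "iterates" that merely swap mass between two cells;
# the total amount of |p| is unchanged.
pn = np.array([1.0, 0.0, 3.0])
p  = np.array([0.0, 1.0, 3.0])

# Criterion as in the original Python / Chapel code:
# difference of the individual 1-norms.
diff_of_norms = (np.sum(np.abs(p)) - np.sum(np.abs(pn))) / np.sum(np.abs(pn))

# Cauchy-style criterion: 1-norm of the difference.
norm_of_diff = np.sum(np.abs(p - pn)) / np.sum(np.abs(pn))

print(diff_of_norms)  # 0.0 -> would report "converged"
print(norm_of_diff)   # 0.5 -> the iterate is clearly still changing
```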

For the record

```
$ time ./nsStep11 --nx=99 --ny=99
Ran simulation in 2.30527 seconds
mean pressure: -0.01934
./nsStep11 --nx=99 --ny=99  55.23s user 0.06s system 2359% cpu 2.344 total
$ time ./nsStep11 --nx=101 --ny=101
Ran simulation in 2.31869 seconds
mean pressure: nan
./nsStep11 --nx=101 --ny=101  55.57s user 0.05s system 2360% cpu 2.356 total
```

Not sure what the problem is but a NaN is not a good thing to have.

It is the algorithm which has the issues. No problems with Chapel itself.

Hi @damianmoz, thanks for the questions! Hopefully this helps:

It took me a long while to figure out what a source condition was?

@tbzy's answer is spot on:

I guess the word "source" above refers to the right-hand side ("b") of the Poisson like equation (nabla p = b) and the "source condition" probably means the setting / definition of the b term.

I think I originally learned this term in the context of computational electromagnetics (where a "source term" would be something like an incoming electric field exciting an antenna).
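For what it's worth, the source term comes from taking the divergence of the momentum equation; if I recall the Python tutorial's Step 11 correctly, it has the form (my transcription, worth double-checking against the tutorial):

$ b = \rho \left[ \frac{1}{\Delta t}\left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right) - \left( \frac{\partial u}{\partial x} \right)^2 - 2\,\frac{\partial u}{\partial y}\frac{\partial v}{\partial x} - \left( \frac{\partial v}{\partial y} \right)^2 \right] $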

How do you choose nx and ny given that you want a mesh of 30*30 points?

It should just be --nx=30 --ny=30

Also, y = 0 and y = 2? Are these the left and right edge of the 2D region in question?

Yes, by default the domain is 2 by 2 (meters). Its size can be modified by changing the xLen and yLen variables.

If one assigns some StencilDist array, say x, to another, y ... Does that copy the fluff?

Yes, that's correct, the fluff buffers should be copied along with the array itself.

Why is p.updateFluff() called when it is rather than pn.updateFluff() called as it was just prior to the call to poissonKernel() as it was in nsPoissonStencil.chpl?

Either pattern achieves essentially the same effect. The benefit of this arrangement is that after the loop, the solution array, p, has updated fluff buffers. With the alternative pattern, you'd want to call p.updateFluff() immediately after the for-loop to achieve the same effect.

This is the Cauchy convergence criterion for a sequence of vectors.
The difference of the individual 1-norms as used in Barba's original Python and replicated in the Chapel code is, I would suggest, not a true convergence measure...

Yeah, that's true, it's sort of a strange convergence criterion, and I'm not 100% sure why it was used in the original tutorials, but I went ahead and copied what they used to keep things consistent. It would definitely be interesting to try the condition you've suggested to see if it achieves noticeably different results.

Not sure what the problem is but a NaN is not a good thing to have.
It is the algorithm which has the issues. No problems with Chapel itself.

That's probably a case where the algorithm has become numerically unstable. I'm guessing it's because the ratio of finite difference points to unit length is too large, and thus the "derivatives" become too large to be numerically stable. You should be able to get around this by also increasing xLen and yLen.
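As a side note, this kind of blow-up can be reproduced with a minimal explicit finite-difference scheme. This is a 1-D diffusion analogue, not the blog's actual code: with the time step held fixed, refining the grid eventually violates the stability limit dt <= dx^2/2 and the solution explodes.

```python
import numpy as np

def diffuse(nx, dt, nt=200, length=2.0):
    """Explicit FTCS scheme for u_t = u_xx on [0, length], fixed dt."""
    dx = length / (nx - 1)
    r = dt / dx**2                 # stability requires r <= 0.5
    u = np.zeros(nx)
    u[nx // 2] = 1.0               # initial spike, zero on the boundary
    with np.errstate(over="ignore", invalid="ignore"):
        for _ in range(nt):
            u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return np.max(np.abs(u))

dt = 0.0016
print(diffuse(nx=21, dt=dt))   # coarse grid: r = 0.16, stays bounded
print(diffuse(nx=101, dt=dt))  # fine grid:   r = 4.0, blows up to inf/NaN
```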

Best,
Jeremiah


The algorithm should be independent of xLen and yLen.

Beyond a 2D mesh of 99x99, it fails. A finite difference program should work better the more you refine the mesh. In this case, the more you refine the mesh, the faster the algorithm breaks down. So the algorithm has fundamental issues. Scary.

P.S. I agree with your use of the same termination condition as the original Python; consistency is the name of the game. That said, the termination condition used in the original Python is more than strange: I think it is mathematically flawed. My advice to anybody using the algorithm is to use the Cauchy convergence criterion, which has a sound (rock solid?) mathematical basis. Cauchy himself was a French mathematician, engineer, and physicist. He was one of the first to rigorously state and prove the key theorems of calculus, and he contributed a lot to the field of continuum mechanics, among many other things.