OpenFPM_pdata  1.1.0
Project that contains the implementation of distributed structures
Simple usage

Simple grid example

This example shows several basic functionalities of the distributed grid.

Initialization

Here we:

  • Initialize the library
  • Create a 3D Box that defines our domain
  • Create an array of 3 unsigned integers that defines the size of the grid in each dimension
  • Create a Ghost object that defines the extension of the ghost part in physical units
// Initialize the library
openfpm_init(&argc,&argv);
// 3D physical domain
Box<3,float> domain({0.0,0.0,0.0},{1.0,1.0,1.0});
// Grid size on each dimension
size_t sz[3] = {100,100,100};
// Ghost part (the extension of 0.1 in physical units is an illustrative value)
Ghost<3,float> g(0.1);

Grid instantiation

Here we are creating a distributed grid defined by the following parameters:

  • 3: the dimensionality of the grid
  • float: the type used for the spatial coordinates
  • aggregate<float[3]>: the information stored by each grid point, here a vector of dimension 3 (float[3]); the list of properties must be put into an aggregate data structure aggregate<prop1,prop2,prop3,...>

Constructor parameters:

  • sz: size of the grid on each dimension
  • domain: where the grid is defined
  • g: ghost extension
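
Putting these together, the grid can be instantiated as shown below. This is a minimal sketch reconstructed from the parameters listed above, using the sz, domain and g objects created during initialization:

// Distributed grid: 3 dimensions, float coordinates, one float[3] property per point
grid_dist_id<3, float, aggregate<float[3]>> g_dist(sz,domain,g);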

Loop over grid points

Get an iterator that goes through all the grid points. In this example we use iterators; iterators are a convenient way to explore/iterate data structures.

// Get the iterator (No ghost)
auto dom = g_dist.getDomainIterator();
// Counter
size_t count = 0;
// Iterate over all the grid points
while (dom.isNext())
{
// next point
++dom;
}

Grid coordinates

Get the local grid key; a local grid key* identifies one point in the grid and stores the local grid coordinates of that point.

(*) Internally, a local grid key stores the sub-domain id (each sub-domain contains a grid) and the local grid point id, identified by 2 integers in 2D, 3 integers in 3D, and so on. These two distinct elements are available with key.getSub() and key.getKey(), as illustrated in the sketch after the code below.

// local grid key from iterator
auto key = dom.get();
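
For illustration, the two components mentioned above can be read separately (a sketch; the returned ids are internal and depend on the domain decomposition):

// id of the sub-domain (local grid) containing the point
auto sub_id = key.getSub();
// coordinates of the point inside that sub-domain
auto local_key = key.getKey();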

Short explanation

In order to get the real/global coordinates of the grid point we have to convert the object key with getGKey:

auto key_g = g_dist.getGKey(key);

Assign properties

Each grid point has a vector property; we write into the vector components the global coordinates of the grid point. At the same time we also count the points.

g_dist.template get<0>(key)[0] = key_g.get(0);
g_dist.template get<0>(key)[1] = key_g.get(1);
g_dist.template get<0>(key)[2] = key_g.get(2);
// Count the points
count++;

Each sub-domain has an extended part that is materially contained in another processor. The function ghost_get guarantees (after it returns) that this extended part is perfectly synchronized with the other processors.

g_dist.template ghost_get<0>();
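
After synchronization the ghost points hold valid copies of the neighboring data. As a sketch, assuming the getDomainGhostIterator accessor, they can be visited together with the domain points:

// Iterator over domain plus ghost points
auto dom_g = g_dist.getDomainGhostIterator();
while (dom_g.isNext())
{
// read the synchronized property here, e.g. g_dist.template get<0>(dom_g.get())
++dom_g;
}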

count contains the number of grid points the local processor holds. If we are interested in the total number of points across all processors, we can use the function sum to sum numbers across processors. First we have to get an instance of Vcluster, queue a sum operation on the variable count, and finally execute. All the operations are asynchronous; execute works like a barrier and ensures that all the queued operations are executed.

// Get the VCluster object
Vcluster & vcl = create_vcluster();
// queue an operation of sum for the counter count
vcl.sum(count);
// execute the operation
vcl.execute();
// only master output
if (vcl.getProcessUnitID() == 0)
std::cout << "Number of points: " << count << "\n";
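
Other reductions follow the same queue-and-execute pattern. As a sketch, assuming the analogous max operation of Vcluster, a maximum across processors looks like this:

// queue a maximum reduction across processors on a copy of the counter
size_t count_max = count;
vcl.max(count_max);
// execute the queued operation (acts like a barrier)
vcl.execute();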

VTK and visualization

Finally we want a nice output to visualize the information stored in the distributed grid. The function write produces VTK files by default, one for each processor, that can be visualized with programs like Paraview.

g_dist.write("output");

Decomposition

For debugging and demonstration purposes we also output the decomposition of the space across processors. This function produces VTK files that can be visualized with Paraview.

g_dist.getDecomposition().write("out_dec");

Here we see the decomposition in 3D for 2 processors. The red box in wire-frame is the processor 0 sub-domain; the blue one is the processor 1 sub-domain. The red solid box is the extended part for processor 0; the blue solid part is the extended part for processor 1.

Finalize

At the very end of the program we always have to de-initialize the library:

openfpm_finalize();

Full code

#include "Grid/grid_dist_id.hpp"
#include "data_type/aggregate.hpp"
int main(int argc, char* argv[])
{
// Initialize the library
openfpm_init(&argc,&argv);
// 3D physical domain
Box<3,float> domain({0.0,0.0,0.0},{1.0,1.0,1.0});
// Grid size on eaxh dimension
size_t sz[3] = {100,100,100};
// Ghost part
// Get the iterator (No ghost)
auto dom = g_dist.getDomainIterator();
// Counter
size_t count = 0;
// Iterate over all the grid points
while (dom.isNext())
{
// local grid key from iterator
auto key = dom.get();
auto key_g = g_dist.getGKey(key);
g_dist.template get<0>(key)[0] = key_g.get(0);
g_dist.template get<0>(key)[1] = key_g.get(1);
g_dist.template get<0>(key)[2] = key_g.get(2);
// Count the points
count++;
// next point
++dom;
}
g_dist.template ghost_get<0>();
// Get the VCluster object
Vcluster & vcl = create_vcluster();
// queue an operation of sum for the counter count
vcl.sum(count);
// execute the operation
vcl.execute();
// only master output
if (vcl.getProcessUnitID() == 0)
std::cout << "Number of points: " << count << "\n";
g_dist.write("output");
g_dist.getDecomposition().write("out_dec");
openfpm_finalize();
}