Implementation of the VCluster class.
This class implements communication functions such as summation, minimum and maximum across processors, and Dynamic Sparse Data Exchange (DSDE).
Definition at line 58 of file VCluster.hpp.
#include <VCluster.hpp>
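As a quick orientation, a minimal usage sketch follows. It assumes the standard OpenFPM entry points openfpm_init/openfpm_finalize and create_vcluster(); the include path is an assumption and may differ between installations.

// Minimal Vcluster usage sketch; sum(), like the other collective calls,
// is queued and completed by execute().
#include <iostream>
#include "VCluster/VCluster.hpp"

int main(int argc, char* argv[])
{
    openfpm_init(&argc, &argv);

    Vcluster<> & v_cl = create_vcluster();

    size_t sum_ranks = v_cl.rank();   // each processor contributes its rank
    v_cl.sum(sum_ranks);              // queue the reduction ...
    v_cl.execute();                   // ... and execute all queued requests

    if (v_cl.rank() == 0)
        std::cout << "processors: " << v_cl.size()
                  << "  sum of ranks: " << sum_ranks << std::endl;

    openfpm_finalize();
    return 0;
}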
Data Structures

struct base_info
    Base info.
struct index_gen
struct index_gen< index_tuple< prp... > >
    Process the receive buffer using the specified properties (meta-function).
struct MetaFuncOrd
    Meta-function.
Public Member Functions

Vcluster (int *argc, char ***argv)
    Constructor.

template<typename T, typename S, template<typename> class layout_base = memory_traits_lin>
bool SGather (T &send, S &recv, size_t root)
    Semantic Gather: gather the data from all processors onto one node.

template<typename T, typename S, template<typename> class layout_base = memory_traits_lin>
bool SGather (T &send, S &recv, openfpm::vector< size_t > &prc, openfpm::vector< size_t > &sz, size_t root)
    Semantic Gather: gather the data from all processors onto one node.

void barrier ()
    Just a call to MPI_Barrier.

template<typename T, typename S, template<typename> class layout_base = memory_traits_lin>
bool SScatter (T &send, S &recv, openfpm::vector< size_t > &prc, openfpm::vector< size_t > &sz, size_t root)
    Semantic Scatter: scatter the data from one processor to the other nodes.

void reorder_buffer (openfpm::vector< size_t > &prc, const openfpm::vector< size_t > &tags, openfpm::vector< size_t > &sz_recv)
    Reorder the receive buffer.

template<typename T, typename S, template<typename> class layout_base = memory_traits_lin>
bool SSendRecv (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt=NONE)
    Semantic Send and receive: send the data to processors and receive from the other processors.

template<typename T, typename S, template<typename> class layout_base = memory_traits_lin>
bool SSendRecvAsync (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt=NONE)
    Semantic Send and receive, asynchronous version.

template<typename T, typename S, template<typename> class layout_base, int ... prp>
bool SSendRecvP (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, openfpm::vector< size_t > &sz_recv_byte_out, size_t opt=NONE)
    Semantic Send and receive with properties.

template<typename T, typename S, template<typename> class layout_base, int ... prp>
bool SSendRecvPAsync (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, openfpm::vector< size_t > &sz_recv_byte_out, size_t opt=NONE)
    Semantic Send and receive with properties, asynchronous version.

template<typename T, typename S, template<typename> class layout_base, int ... prp>
bool SSendRecvP (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt=NONE)
    Semantic Send and receive with properties.

template<typename T, typename S, template<typename> class layout_base, int ... prp>
bool SSendRecvPAsync (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt=NONE)
    Semantic Send and receive with properties, asynchronous version.

template<typename op, typename T, typename S, template<typename> class layout_base, int ... prp>
bool SSendRecvP_op (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, op &op_param, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &recv_sz, size_t opt=NONE)
    Semantic Send and receive with a custom merge operation.

template<typename op, typename T, typename S, template<typename> class layout_base, int ... prp>
bool SSendRecvP_opAsync (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, op &op_param, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &recv_sz, size_t opt=NONE)
    Semantic Send and receive with a custom merge operation, asynchronous version.

template<typename T, typename S, template<typename> class layout_base = memory_traits_lin>
bool SSendRecvWait (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt=NONE)
    Synchronize with SSendRecv.

template<typename T, typename S, template<typename> class layout_base, int ... prp>
bool SSendRecvPWait (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, openfpm::vector< size_t > &sz_recv_byte_out, size_t opt=NONE)
    Synchronize with SSendRecvP.

template<typename T, typename S, template<typename> class layout_base, int ... prp>
bool SSendRecvPWait (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt=NONE)
    Synchronize with SSendRecvP.

template<typename op, typename T, typename S, template<typename> class layout_base, int ... prp>
bool SSendRecvP_opWait (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, op &op_param, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &recv_sz, size_t opt=NONE)
    Synchronize with SSendRecvP_op.
Public Member Functions inherited from Vcluster_base< InternalMemory >

Vcluster_base (int *argc, char ***argv)
    Virtual cluster constructor.

gpu::ofp_context_t & getgpuContext (bool iw=true)
    If NVIDIA CUDA is activated, return a GPU context.

MPI_Comm getMPIComm ()
    Get the MPI communicator (processor group) this VCluster is using.

size_t getProcessingUnits ()
    Get the total number of processors.

size_t size ()
    Get the total number of processors.

void print_stats ()

void clear_stats ()

size_t getProcessUnitID ()
    Get the process unit id.

size_t rank ()
    Get the process unit id.

template<typename T>
void sum (T &num)
    Sum the numbers across all processors and get the result.

template<typename T>
void max (T &num)
    Get the maximum number across all processors (reduction with the infinity norm).

template<typename T>
void min (T &num)
    Get the minimum number across all processors (reduction with the infinity norm).

void progressCommunication ()
    In the case of asynchronous communications, such as sendrecvMultipleMessagesNBXAsync, this function progresses the communication.

template<typename T>
void sendrecvMultipleMessagesNBX (openfpm::vector< size_t > &prc, openfpm::vector< T > &data, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &recv_sz, void *(*msg_alloc)(size_t, size_t, size_t, size_t, size_t, size_t, void *), void *ptr_arg, long int opt=NONE)
    Send and receive multiple messages.

template<typename T>
void sendrecvMultipleMessagesNBXAsync (openfpm::vector< size_t > &prc, openfpm::vector< T > &data, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &recv_sz, void *(*msg_alloc)(size_t, size_t, size_t, size_t, size_t, size_t, void *), void *ptr_arg, long int opt=NONE)
    Send and receive multiple messages, asynchronous version.

template<typename T>
void sendrecvMultipleMessagesNBX (openfpm::vector< size_t > &prc, openfpm::vector< T > &data, void *(*msg_alloc)(size_t, size_t, size_t, size_t, size_t, size_t, void *), void *ptr_arg, long int opt=NONE)
    Send and receive multiple messages.

template<typename T>
void sendrecvMultipleMessagesNBXAsync (openfpm::vector< size_t > &prc, openfpm::vector< T > &data, void *(*msg_alloc)(size_t, size_t, size_t, size_t, size_t, size_t, void *), void *ptr_arg, long int opt=NONE)
    Send and receive multiple messages, asynchronous version.

void sendrecvMultipleMessagesNBX (size_t n_send, size_t sz[], size_t prc[], void *ptr[], size_t n_recv, size_t prc_recv[], size_t sz_recv[], void *(*msg_alloc)(size_t, size_t, size_t, size_t, size_t, size_t, void *), void *ptr_arg, long int opt=NONE)
    Send and receive multiple messages.

void sendrecvMultipleMessagesNBXAsync (size_t n_send, size_t sz[], size_t prc[], void *ptr[], size_t n_recv, size_t prc_recv[], size_t sz_recv[], void *(*msg_alloc)(size_t, size_t, size_t, size_t, size_t, size_t, void *), void *ptr_arg, long int opt=NONE)
    Send and receive multiple messages, asynchronous version.

void sendrecvMultipleMessagesNBX (size_t n_send, size_t sz[], size_t prc[], void *ptr[], size_t n_recv, size_t prc_recv[], void *(*msg_alloc)(size_t, size_t, size_t, size_t, size_t, size_t, void *), void *ptr_arg, long int opt=NONE)
    Send and receive multiple messages.

void sendrecvMultipleMessagesNBXAsync (size_t n_send, size_t sz[], size_t prc[], void *ptr[], size_t n_recv, size_t prc_recv[], void *(*msg_alloc)(size_t, size_t, size_t, size_t, size_t, size_t, void *), void *ptr_arg, long int opt=NONE)
    Send and receive multiple messages, asynchronous version.

void sendrecvMultipleMessagesNBX (size_t n_send, size_t sz[], size_t prc[], void *ptr[], void *(*msg_alloc)(size_t, size_t, size_t, size_t, size_t, size_t, void *), void *ptr_arg, long int opt=NONE)
    Send and receive multiple messages.

void sendrecvMultipleMessagesNBXAsync (size_t n_send, size_t sz[], size_t prc[], void *ptr[], void *(*msg_alloc)(size_t, size_t, size_t, size_t, size_t, size_t, void *), void *ptr_arg, long int opt=NONE)
    Send and receive multiple messages, asynchronous version.

void sendrecvMultipleMessagesNBXWait ()
    Wait for an NBX communication started with an asynchronous variant to complete.

bool send (size_t proc, size_t tag, const void *mem, size_t sz)
    Send data to a processor.

template<typename T, typename Mem, template<typename> class gr>
bool send (size_t proc, size_t tag, openfpm::vector< T, Mem, gr > &v)
    Send data to a processor.

bool recv (size_t proc, size_t tag, void *v, size_t sz)
    Receive data from a processor.

template<typename T, typename Mem, template<typename> class gr>
bool recv (size_t proc, size_t tag, openfpm::vector< T, Mem, gr > &v)
    Receive data from a processor.

template<typename T, typename Mem, template<typename> class gr>
bool allGather (T &send, openfpm::vector< T, Mem, gr > &v)
    Gather the data from all processors.

template<typename T, typename Mem, template<typename> class layout_base>
bool Bcast (openfpm::vector< T, Mem, layout_base > &v, size_t root)
    Broadcast the data to all processors.

void execute ()
    Execute all the requests.

void clear ()
    Release the buffers used for communication.
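The inherited send/recv members above enqueue requests that complete on execute(). A minimal sketch of this pattern follows; the tag value, the 8-element payload, and the pre-sizing of the receive vector on the receiver are illustrative assumptions.

// Sketch: queued point-to-point exchange between processor 0 and 1,
// completed by execute(). Run with at least 2 processors.
openfpm::vector<double> v_send;
openfpm::vector<double> v_recv;

if (v_cl.rank() == 0)
{
    for (size_t i = 0 ; i < 8 ; i++)
        v_send.add((double)i);
    v_cl.send(1,100,v_send);     // queue a send to processor 1 with tag 100
}
else if (v_cl.rank() == 1)
{
    v_recv.resize(8);            // assumed: receiver sizes the buffer
    v_cl.recv(0,100,v_recv);     // queue a receive from processor 0, tag 100
}

v_cl.execute();                  // execute all the queued requests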
Private Types

typedef Vcluster_base< InternalMemory > self_base

Private Member Functions

template<typename op, typename T, typename S, template<typename> class layout_base>
void prepare_send_buffer (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt)
    Prepare the send buffer and send the message to the other processors.

void reset_recv_buf ()
    Reset the receive buffer.

template<typename op, typename T, typename S, template<typename> class layout_base, unsigned int ... prp>
void process_receive_buffer_with_prp (S &recv, openfpm::vector< size_t > *sz, openfpm::vector< size_t > *sz_byte, op &op_param, size_t opt)
    Process the receive buffer.

Static Private Member Functions

static void * msg_alloc (size_t msg_i, size_t total_msg, size_t total_p, size_t i, size_t ri, size_t tag, void *ptr)
    Callback to allocate the buffer that receives the data.

static void * msg_alloc_known (size_t msg_i, size_t total_msg, size_t total_p, size_t i, size_t ri, size_t tag, void *ptr)
    Callback to allocate the buffer that receives the data.

Private Attributes

ExtPreAlloc< HeapMemory > * mem [NQUEUE]
openfpm::vector< size_t > sz_recv_byte [NQUEUE]
openfpm::vector< const void * > send_buf
openfpm::vector< size_t > send_sz_byte
openfpm::vector< size_t > prc_send_
unsigned int NBX_prc_scnt = 0
unsigned int NBX_prc_pcnt = 0
HeapMemory * pmem [NQUEUE]
base_info< InternalMemory > NBX_prc_bi [NQUEUE]
Additional Inherited Members

Data Fields inherited from Vcluster_base< InternalMemory >

openfpm::vector< size_t > sz_recv_tmp

Protected Attributes inherited from Vcluster_base< InternalMemory >

openfpm::vector_fr< BMemory< InternalMemory > > recv_buf [NQUEUE]
    Receive buffers.

openfpm::vector< size_t > tags [NQUEUE]
    Receive tags.
self_base
typedef Vcluster_base< InternalMemory > self_base    [private]
Definition at line 123 of file VCluster.hpp.
Vcluster()
Vcluster (int *argc, char ***argv)    [inline]
Constructor.
Parameters:
    argc: main number of arguments
    argv: main set of arguments
Definition at line 418 of file VCluster.hpp.
barrier()
void barrier ()    [inline]
Just a call to MPI_Barrier.
Definition at line 589 of file VCluster.hpp.
msg_alloc()
static void * msg_alloc (size_t msg_i, size_t total_msg, size_t total_p, size_t i, size_t ri, size_t tag, void *ptr)    [inline, static, private]
Callback to allocate the buffer that receives the data.
Parameters:
    msg_i: size required to receive the message from processor i
    total_msg: total size to receive from all the processors
    total_p: total number of processors that want to communicate with you
    i: processor id
    ri: request id (an id that goes from 0 to total_p and is unique every time msg_alloc is called)
    tag: message tag
    ptr: a pointer to the vector_dist structure
Definition at line 318 of file VCluster.hpp.
msg_alloc_known()
static void * msg_alloc_known (size_t msg_i, size_t total_msg, size_t total_p, size_t i, size_t ri, size_t tag, void *ptr)    [inline, static, private]
Callback to allocate the buffer that receives the data.
Parameters:
    msg_i: size required to receive the message from processor i
    total_msg: total size to receive from all the processors
    total_p: total number of processors that want to communicate with you
    i: processor id
    ri: request id (an id that goes from 0 to total_p and is unique every time msg_alloc is called)
    tag: message tag
    ptr: a pointer to the vector_dist structure
Definition at line 366 of file VCluster.hpp.
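Both callbacks above follow the allocator signature accepted by the public sendrecvMultipleMessagesNBX family. A minimal sketch of a user-supplied allocator follows; the receive storage passed through ptr (a vector of byte vectors) and the getPointer() accessor are illustrative assumptions.

// Sketch of a user-supplied allocation callback for the
// sendrecvMultipleMessagesNBX family.
void * user_msg_alloc(size_t msg_i, size_t total_msg, size_t total_p,
                      size_t i, size_t ri, size_t tag, void * ptr)
{
    openfpm::vector<openfpm::vector<unsigned char>> & buf =
        *static_cast<openfpm::vector<openfpm::vector<unsigned char>> *>(ptr);

    buf.resize(total_p);           // one buffer per communicating processor
    buf.get(ri).resize(msg_i);     // room for the message from processor i

    return buf.get(ri).getPointer();   // the library writes the message here
}

// usage with the unknown-receiver overload (prc and data filled beforehand):
// openfpm::vector<openfpm::vector<unsigned char>> recv_buf;
// v_cl.sendrecvMultipleMessagesNBX(prc, data, user_msg_alloc, &recv_buf);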
prepare_send_buffer()
void prepare_send_buffer (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt)    [inline, private]
Prepare the send buffer and send the message to the other processors.
Template Parameters:
    op: operation to execute when merging the received data
    T: sending object
    S: receiving object
Parameters:
    send: sending buffer
    recv: receiving object
    prc_send: each object T in the vector send is sent to the processor specified in this list; this means that prc_send.size() == send.size()
    prc_recv: list of processors from which we receive (output); with RECEIVE_KNOWN it must be filled as input
    sz_recv: size of each receiving message (output); with RECEIVE_KNOWN it must be filled as input
    opt: options; RECEIVE_KNOWN enables patterns with lower latency, in which case prc_recv and sz_recv become inputs
Definition at line 171 of file VCluster.hpp.
process_receive_buffer_with_prp()
void process_receive_buffer_with_prp (S &recv, openfpm::vector< size_t > *sz, openfpm::vector< size_t > *sz_byte, op &op_param, size_t opt)    [inline, private]
Process the receive buffer.
Template Parameters:
    op: operation to perform when merging the received data
    T: type of the sending object
    S: type of the receiving object
    prp: properties to receive
Parameters:
    recv: receive object
    sz: vector that stores how many elements have been added to S per processor
    sz_byte: bytes received on a per-processor basis
    op_param: operation to perform when merging the received information with recv
Definition at line 398 of file VCluster.hpp.
reorder_buffer()
void reorder_buffer (openfpm::vector< size_t > &prc, const openfpm::vector< size_t > &tags, openfpm::vector< size_t > &sz_recv)    [inline]
Reorder the receive buffer.
Parameters:
    prc: list of the receiving processors
    sz_recv: list of sizes of the receiving messages (in bytes)
Definition at line 692 of file VCluster.hpp.
reset_recv_buf()
void reset_recv_buf ()    [inline, private]
Reset the receive buffer.
Definition at line 297 of file VCluster.hpp.
SGather()
bool SGather (T &send, S &recv, openfpm::vector< size_t > &prc, openfpm::vector< size_t > &sz, size_t root)    [inline]
Semantic Gather: gather the data from all processors onto one node.
Semantic communications differ from normal communications; in general they follow this model:
Gather(T,S,root,op=add);
"Gather" indicates the communication pattern, or how the information flows. T is the object to send and S is the object that will receive the data. For this to work, S must implement the interface S.add(T).
Template Parameters:
    T: type of the sending object
    S: type of the receiving object
Parameters:
    send: object to send
    recv: object to receive
    root: which node should collect the information
    prc: processors from which we received the information
    sz: size of the information received from each processor
Definition at line 495 of file VCluster.hpp.
SGather()
bool SGather (T &send, S &recv, size_t root)    [inline]
Semantic Gather: gather the data from all processors onto one node.
Semantic communications differ from normal communications; in general they follow this model:
Gather(T,S,root,op=add);
"Gather" indicates the communication pattern, or how the information flows. T is the object to send and S is the object that will receive the data. For this to work, S must implement the interface S.add(T).
Template Parameters:
    T: type of the sending object
    S: type of the receiving object
Parameters:
    send: object to send
    recv: object to receive
    root: which node should collect the information
Definition at line 450 of file VCluster.hpp.
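A minimal sketch of this overload, assuming openfpm::vector<size_t> for both T and S so that S.add(T) appends the received elements:

// Sketch: each processor contributes its rank; processor 0 collects all.
openfpm::vector<size_t> send;
send.add(v_cl.rank());

openfpm::vector<size_t> recv;          // filled on the root only
v_cl.SGather(send, recv, 0);           // root = processor 0

if (v_cl.rank() == 0)
{
    // recv now holds one entry per processor: recv.size() == v_cl.size()
}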
SScatter()
bool SScatter (T &send, S &recv, openfpm::vector< size_t > &prc, openfpm::vector< size_t > &sz, size_t root)    [inline]
Semantic Scatter: scatter the data from one processor to the other nodes.
Semantic communications differ from normal communications; in general they follow this model:
Scatter(T,S,...,op=add);
"Scatter" indicates the communication pattern, or how the information flows. T is the object to send and S is the object that will receive the data. For this to work, S must implement the interface S.add(T).
Template Parameters:
    T: type of the sending object
    S: type of the receiving object
Parameters:
    send: object to send
    recv: object to receive
    prc: processors involved in the scatter
    sz: size of each chunk
    root: which processor should scatter the information
Definition at line 621 of file VCluster.hpp.
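A minimal sketch, with an illustrative chunk size of 2 elements per processor:

// Sketch: processor 0 splits `send` into chunks of sz.get(j) elements and
// ships chunk j to processor prc.get(j).
openfpm::vector<double> send;          // filled on the root only
openfpm::vector<size_t> prc;           // destination of each chunk
openfpm::vector<size_t> sz;            // size of each chunk

if (v_cl.rank() == 0)
{
    for (size_t p = 0 ; p < v_cl.size() ; p++)
    {
        prc.add(p);
        sz.add(2);
        send.add(2.0 * p);
        send.add(2.0 * p + 1.0);
    }
}

openfpm::vector<double> recv;
v_cl.SScatter(send, recv, prc, sz, 0);
// every processor now holds its 2-element chunk in recv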
SSendRecv()
bool SSendRecv (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt=NONE)    [inline]
Semantic Send and receive: send the data to processors and receive from the other processors.
Semantic communications differ from normal communications; in general they follow this model:
SendRecv(T,S,...,op=add);
"SendRecv" indicates the communication pattern, or how the information flows. T is the object to send and S is the object that will receive the data. For this to work, S must implement the interface S.add(T).
Template Parameters:
    T: type of the sending object
    S: type of the receiving object
Parameters:
    send: objects to send
    recv: object to receive
    prc_send: destination processors
    prc_recv: list of the processors from which we receive (output)
    sz_recv: number of elements added per processor (output)
    opt: options
Definition at line 797 of file VCluster.hpp.
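A minimal sketch of the resulting dynamic sparse exchange, where every processor sends one vector to its right neighbour; prc_recv and sz_recv are outputs filled by the call:

// Sketch of the DSDE pattern: one object T per destination in prc_send;
// received objects are merged into recv via recv.add(T).
openfpm::vector<openfpm::vector<double>> send;
openfpm::vector<size_t> prc_send;

size_t right = (v_cl.rank() + 1) % v_cl.size();
prc_send.add(right);
send.add();
send.last().add(1000.0 * v_cl.rank());           // payload for the neighbour

openfpm::vector<double> recv;                    // received elements land here
openfpm::vector<size_t> prc_recv;                // who we received from (output)
openfpm::vector<size_t> sz_recv;                 // elements added per processor (output)

v_cl.SSendRecv(send, recv, prc_send, prc_recv, sz_recv);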
SSendRecvAsync()
bool SSendRecvAsync (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt=NONE)    [inline]
Semantic Send and receive, asynchronous version: send the data to processors and receive from the other processors.
Semantic communications differ from normal communications; in general they follow this model:
SendRecv(T,S,...,op=add);
"SendRecv" indicates the communication pattern, or how the information flows. T is the object to send and S is the object that will receive the data. For this to work, S must implement the interface S.add(T).
Template Parameters:
    T: type of the sending object
    S: type of the receiving object
Parameters:
    send: objects to send
    recv: object to receive
    prc_send: destination processors
    prc_recv: list of the processors from which we receive (output)
    sz_recv: number of elements added per processor (output)
    opt: options
Definition at line 858 of file VCluster.hpp.
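A minimal sketch of the asynchronous pattern, reusing the identifiers from the synchronous sketch above; the call to progressCommunication() is optional:

// Start the exchange, overlap local work, then complete it with
// SSendRecvWait called with the same arguments.
v_cl.SSendRecvAsync(send, recv, prc_send, prc_recv, sz_recv);

// ... local work overlapped with communication ...
v_cl.progressCommunication();    // optionally push the NBX protocol forward

v_cl.SSendRecvWait(send, recv, prc_send, prc_recv, sz_recv);
// recv, prc_recv and sz_recv are valid only after the wait returns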
SSendRecvP()
bool SSendRecvP (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, openfpm::vector< size_t > &sz_recv_byte_out, size_t opt=NONE)    [inline]
Semantic Send and receive: send the data to processors and receive from the other processors (with properties).
Semantic communications differ from normal communications; in general they follow this model:
SSendRecv(T,S,...,op=add);
"SendRecv" indicates the communication pattern, or how the information flows. T is the object to send and S is the object that will receive the data. For this to work, S must implement the interface S.add<prp...>(T).
Template Parameters:
    T: type of the sending object
    S: type of the receiving object
    prp: properties to merge
Parameters:
    send: objects to send
    recv: object to receive
    prc_send: destination processors
    prc_recv: processors from which we received (output)
    sz_recv: number of elements added per processor (output)
    sz_recv_byte: message size received from each processor, in bytes (output)
Definition at line 901 of file VCluster.hpp.
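A minimal sketch; the aggregate layout, the explicit template arguments, and memory_traits_lin as layout_base are illustrative assumptions:

// Sketch: like SSendRecv, but only the listed properties (0 and 1 here) of
// each aggregate element are communicated and merged via S.add<0,1>(T).
typedef openfpm::vector<aggregate<float, float[3]>> part_type;

openfpm::vector<part_type> send;      // one block per destination processor
openfpm::vector<size_t> prc_send;
part_type recv;
openfpm::vector<size_t> prc_recv;     // output
openfpm::vector<size_t> sz_recv;      // output
openfpm::vector<size_t> sz_recv_byte; // output

// layout_base has no default in this overload, so it is spelled explicitly
v_cl.SSendRecvP<part_type, part_type, memory_traits_lin, 0, 1>(
        send, recv, prc_send, prc_recv, sz_recv, sz_recv_byte);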
SSendRecvP()
bool SSendRecvP (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt=NONE)    [inline]
Semantic Send and receive: send the data to processors and receive from the other processors (with properties).
Semantic communications differ from normal communications; in general they follow this model:
SSendRecv(T,S,...,op=add);
"SendRecv" indicates the communication pattern, or how the information flows. T is the object to send and S is the object that will receive the data. For this to work, S must implement the interface S.add<prp...>(T).
Template Parameters:
    T: type of the sending object
    S: type of the receiving object
    prp: properties to merge
Parameters:
    send: objects to send
    recv: object to receive
    prc_send: destination processors
    prc_recv: list of the processors from which we receive (output)
    sz_recv: number of elements added per processor (output)
Definition at line 1004 of file VCluster.hpp.
SSendRecvP_op()
bool SSendRecvP_op (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, op &op_param, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &recv_sz, size_t opt=NONE)    [inline]
Semantic Send and receive: send the data to processors and receive from the other processors, merging the received data with a custom operation.
Semantic communications differ from normal communications; in general they follow this model:
SSendRecv(T,S,...,op=add);
"SendRecv" indicates the communication pattern, or how the information flows. T is the object to send and S is the object that will receive the data. For this to work, S must implement the interface S.add<prp...>(T).
Template Parameters:
    op: type of the merge operation
    T: type of the sending object
    S: type of the receiving object
    prp: properties to merge
Parameters:
    send: objects to send
    recv: object to receive
    prc_send: destination processors
    op_param: operation object (the operation to perform when merging the received information with recv)
    recv_sz: size of each received message; normally an output, but with RECEIVE_KNOWN it must be provided as input
    prc_recv: processors from which we receive messages; normally an output, but with RECEIVE_KNOWN it must be provided as input
    opt: options; the default is NONE, another option is RECEIVE_KNOWN, in which case each processor is assumed to know from which processors it receives and the size of each message, so prc_recv and recv_sz are no longer outputs and must be given as input
Definition at line 1117 of file VCluster.hpp.
SSendRecvP_opAsync()
bool SSendRecvP_opAsync (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, op &op_param, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &recv_sz, size_t opt=NONE)    [inline]
Semantic Send and receive, asynchronous version: send the data to processors and receive from the other processors, merging the received data with a custom operation.
Semantic communications differ from normal communications; in general they follow this model:
SSendRecv(T,S,...,op=add);
"SendRecv" indicates the communication pattern, or how the information flows. T is the object to send and S is the object that will receive the data. For this to work, S must implement the interface S.add<prp...>(T).
Template Parameters:
    op: type of the merge operation
    T: type of the sending object
    S: type of the receiving object
    prp: properties to merge
Parameters:
    send: objects to send
    recv: object to receive
    prc_send: destination processors
    op_param: operation object (the operation to perform when merging the received information with recv)
    recv_sz: size of each received message; normally an output, but with RECEIVE_KNOWN it must be provided as input
    prc_recv: processors from which we receive messages; normally an output, but with RECEIVE_KNOWN it must be provided as input
    opt: options; the default is NONE, another option is RECEIVE_KNOWN, in which case each processor is assumed to know from which processors it receives and the size of each message, so prc_recv and recv_sz are no longer outputs and must be given as input
Definition at line 1185 of file VCluster.hpp.
SSendRecvP_opWait()
bool SSendRecvP_opWait (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, op &op_param, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &recv_sz, size_t opt=NONE)    [inline]
Synchronize with SSendRecvP_op.
Definition at line 1328 of file VCluster.hpp.
SSendRecvPAsync()
bool SSendRecvPAsync (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, openfpm::vector< size_t > &sz_recv_byte_out, size_t opt=NONE)    [inline]
Semantic Send and receive, asynchronous version: send the data to processors and receive from the other processors (with properties).
Semantic communications differ from normal communications; in general they follow this model:
SSendRecv(T,S,...,op=add);
"SendRecv" indicates the communication pattern, or how the information flows. T is the object to send and S is the object that will receive the data. For this to work, S must implement the interface S.add<prp...>(T).
Template Parameters:
    T: type of the sending object
    S: type of the receiving object
    prp: properties to merge
Parameters:
    send: objects to send
    recv: object to receive
    prc_send: destination processors
    prc_recv: processors from which we received (output)
    sz_recv: number of elements added per processor (output)
    sz_recv_byte: message size received from each processor, in bytes (output)
Definition at line 961 of file VCluster.hpp.
SSendRecvPAsync()
bool SSendRecvPAsync (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt=NONE)    [inline]
Semantic Send and receive, asynchronous version: send the data to processors and receive from the other processors (with properties).
Semantic communications differ from normal communications; in general they follow this model:
SSendRecv(T,S,...,op=add);
"SendRecv" indicates the communication pattern, or how the information flows. T is the object to send and S is the object that will receive the data. For this to work, S must implement the interface S.add<prp...>(T).
Template Parameters:
    T: type of the sending object
    S: type of the receiving object
    prp: properties to merge
Parameters:
    send: objects to send
    recv: object to receive
    prc_send: destination processors
    prc_recv: list of the processors from which we receive (output)
    sz_recv: number of elements added per processor (output)
Definition at line 1062 of file VCluster.hpp.
SSendRecvPWait()
bool SSendRecvPWait (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, openfpm::vector< size_t > &sz_recv_byte_out, size_t opt=NONE)    [inline]
Synchronize with SSendRecvP.
Definition at line 1247 of file VCluster.hpp.
SSendRecvPWait()
bool SSendRecvPWait (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt=NONE)    [inline]
Synchronize with SSendRecvP.
Definition at line 1286 of file VCluster.hpp.
SSendRecvWait()
bool SSendRecvWait (openfpm::vector< T > &send, S &recv, openfpm::vector< size_t > &prc_send, openfpm::vector< size_t > &prc_recv, openfpm::vector< size_t > &sz_recv, size_t opt=NONE)    [inline]
Synchronize with SSendRecv.
Definition at line 1208 of file VCluster.hpp.
mem
ExtPreAlloc< HeapMemory > * mem [NQUEUE]    [private]
Definition at line 61 of file VCluster.hpp.

NBX_prc_bi
base_info< InternalMemory > NBX_prc_bi [NQUEUE]    [private]
Definition at line 121 of file VCluster.hpp.

NBX_prc_pcnt
unsigned int NBX_prc_pcnt = 0    [private]
Definition at line 72 of file VCluster.hpp.

NBX_prc_scnt
unsigned int NBX_prc_scnt = 0    [private]
Definition at line 71 of file VCluster.hpp.

pmem
HeapMemory * pmem [NQUEUE]    [private]
Definition at line 77 of file VCluster.hpp.

prc_send_
openfpm::vector< size_t > prc_send_    [private]
Definition at line 69 of file VCluster.hpp.

send_buf
openfpm::vector< const void * > send_buf    [private]
Definition at line 67 of file VCluster.hpp.

send_sz_byte
openfpm::vector< size_t > send_sz_byte    [private]
Definition at line 68 of file VCluster.hpp.

sz_recv_byte
openfpm::vector< size_t > sz_recv_byte [NQUEUE]    [private]
Definition at line 64 of file VCluster.hpp.