The pitching flap WEC hinged close to the bottom of the ocean is known as an oscillating wave surge converter (OWSC) (Babarit et al. 2012). As noted by Babarit et al. (2012) and Folley et al. (2007b), OWSCs are designed for shallow waters to attain higher horizontal velocities and pitching motions. Several experimental, analytical and numerical studies of OWSCs have been reported. The experimental studies of OWSCs are mostly performed on scaled model devices in wave tanks. Henry et al. (2014b) reported an experimental study of a 1/25th-scale OWSC model along with numerical simulations using both OpenFOAM (conventional CFD) and an SPH method. The time histories of the OWSC rotation angle obtained by the three methods showed good agreement. The time evolution of pressure at two sensors from the SPH simulation was compared with the experimental data and good agreement was achieved. Henry et al. (2014a) performed two-dimensional experiments on a 1/40th-scale OWSC model and compared the results with numerical simulations; it was concluded that the slamming of the model is related to the classic wedge water entry problem (Oger et al. 2006; Zhao et al. 1996). Clabby and Tease (2015) performed a series of experiments to explore extreme events related to a 1/20th-scale OWSC model and reported that the extreme pressures occur during wave breaking or re-entry slamming of the flap.
It is worth pointing out that experimental studies are extremely important for investigating OWSC devices, both because of the complex phenomena involved and for validating numerical simulations. However, it is costly to perform parametric studies, such as changing the flap size, wave conditions and tank dimensions, in experimental campaigns. Therefore, numerical simulations are also extremely important to efficiently design the experiments. Potential flow methods have been extensively applied to ocean engineering problems. Although these methods are restricted to solving linear inviscid equations, they provide valuable insight into the problem in a reasonable time. Studies based on potential flow methods applied to OWSCs can be found in Folley et al. (2007a) and Renzi and Dias (2012, 2013). Since OWSCs are designed for shallow waters, they may experience extreme wave loads, as mentioned by Wei et al. (2015) and Henry et al. (2014b). Moreover, the interactions between waves and an OWSC may include complex phenomena such as slamming, wave over-topping, air entrainment and turbulence (Wei et al. 2015). Therefore, potential flow methods have limitations in capturing the details, especially the nonlinearities, involved in the interaction of OWSCs with shallow-water waves.
Rafiee and Dias (2013) performed 2D and 3D simulations of wave interaction with an OWSC using SPH. The \(k\)-\(\epsilon\) turbulence model was used along with the SPH method to study the effects of wave loads on the OWSC. It was concluded that 3D simulations provide a more accurate estimation of the pressure peaks and angles of rotation compared with the 2D simulations.
In this paper, we report our work on wave interaction with an OWSC device. A custom SPH method was implemented using parallel computing and an incompressible formulation of the governing equations. The methodology we followed and the parallel scheme used to implement the new OpenMP SPH code are discussed in Sects. 2.1 and 3, respectively. A classical wedge water entry problem is presented in Sect. 4.2 as a slamming benchmark test case. The experimental setup simulated is described in Sect. 4.3.1, and the numerical results are compared with the available experimental data in Sect. 4.3.2. The performance of the new parallel SPH code is also reported for a dam break on a tall square structure (Sect. 4.1).
Applying solid boundary conditions is the most challenging task in the SPH method. The three main methods reported to simulate solid boundaries in SPH are: repulsive boundary particles (Monaghan 1994; Monaghan and Kos 1999), dummy boundary particles (Koshizuka et al. 1998; Lo and Shao 2002) and ghost boundary particles (Colagrossi and Landrini 2003). In the repulsive boundary particles approach, a single line of boundary particles is placed on the edge of the solid boundary, exerting a repulsive force on the fluid particles approaching them. In the dummy boundary particles approach, several layers of particles are placed on the edge of and inside the solid boundary. In the ghost boundary particles approach, the positions of the ghost particles are determined by reflecting the fluid particle positions through the solid boundary. The pressure of each ghost particle is the same as that of its corresponding fluid particle (in the presence of gravity, there is an additional hydrostatic contribution). Each of these methods has advantages and drawbacks in terms of accuracy and computational complexity.
In this paper, fixed dummy particles are used for the solid boundary particles both on the tank walls and on the OWSC flap. Dummy particles have the advantage of being easy to implement, especially in a parallel SPH method. In the current work, we used the method described by Adami et al. (2012) to calculate the pressure of the boundary particles from the surrounding fluid particles as
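\[
p_w=\frac{\sum_f p_f\, W_{wf}+\left(\mathbf{g}-\mathbf{a}_w\right)\cdot\sum_f \rho_f\, \mathbf{r}_{wf}\, W_{wf}}{\sum_f W_{wf}},
\]
where the subscript \(w\) denotes a wall (dummy) particle, the sums run over its neighboring fluid particles \(f\), \(W_{wf}\) is the kernel evaluated at \(\mathbf{r}_{wf}=\mathbf{r}_w-\mathbf{r}_f\), \(\mathbf{g}\) is the gravitational acceleration and \(\mathbf{a}_w\) is the prescribed acceleration of the wall; the form given here follows Adami et al. (2012).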
The SPH method is typically computationally more expensive than Eulerian-based CFD methods. Therefore, parallelization is required to improve the performance of the method, especially for 3D simulations. CPU-based and GPU-based parallelizations are the two main techniques that can be employed for SPH parallelization (Hermanns 2002). CPU-based parallelization is divided into shared-memory and distributed-memory parallelization. The shared-memory approach assumes that the processing units share a common memory (as is the case for multi-core processors) that the parallel tasks can use to communicate and share variables with each other. The thread model is usually used when implementing a shared-memory parallelization. More specifically, OpenMP, a standard for implementing the thread model by adding directives to the code, is a relatively easy way to parallelize an existing serial code. The distributed-memory approach does not rely on a common memory and requires the parallel tasks to communicate by exchanging messages; MPI (Message Passing Interface) is a standard for distributed-memory parallelization. GPU-based parallelization relies on GPUs to schedule and execute the parallel tasks. CUDA, OpenCL and OpenACC are the common programming standards for GPU-based implementations. Several approaches have been applied to parallelize the SPH method using these standards. Ferrari et al. (2009) proposed a parallelization scheme using the MPI standard to study free surface flows. Marrone et al. (2012) studied ship wave breaking patterns using a 3D hybrid MPI and OpenMP approach. A review of CPU-based parallelization implementations for the SPH method in free surface flows is provided by Gomez-Gesteira et al. (2012). GPUs have been applied to SPH methods more recently; a review of GPU-based parallelization implementations for the SPH method is provided by Crespo et al. (2015).
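As a purely generic illustration of the directive-based approach (a sketch, not the code developed in this work), a serial loop over particle positions can be parallelized with a single OpenMP directive:

```cpp
#include <vector>

// Generic sketch: advancing particle positions with an explicit Euler step.
// The single OpenMP directive distributes the loop iterations over the
// available threads; each iteration is independent, so no synchronization
// is required (compile with -fopenmp or the equivalent compiler flag).
void advance_positions(std::vector<double>& x, const std::vector<double>& vx,
                       double dt) {
    const long n = static_cast<long>(x.size());
    #pragma omp parallel for
    for (long i = 0; i < n; ++i) {
        x[i] += vx[i] * dt;
    }
}
```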
The SPH method is both Lagrangian and meshless. Although these two features are attractive in modelling complex free surface flows, they cause difficulties in parallelization schemes (Marrone et al. 2012). Unlike the fixed grids of mesh-based CFD methods, the particles move owing to the Lagrangian nature of the method, and the neighboring particles of a given particle do not remain the same throughout the simulation. Hence, as mentioned by Marrone et al. (2012), the parallel scheme applied to the SPH method must take this specific characteristic into account.
To save computational cost in SPH, only the contributions of neighboring particles \( (r_{ij}\le kh) \) are calculated in the simulation. The link list searching algorithm reported in Gomez-Gesteira et al. (2012) is adopted here to search for the neighboring particles. In this algorithm, the computational domain is divided into square cells of side \(kh\) (the kernel radius). The particles in each cell interact only with particles in the neighboring cells; the eight neighboring cells in 2D are shown in Fig. 1. The sweep of the link list search starts from the lower left end, and in each sweep only the E, N, NW and NE cells are involved to prevent repeating particle interactions (4 cells out of the 8 neighboring cells; Gomez-Gesteira et al. 2012). The same procedure applies in 3D, where interactions with 13 cells out of the 26 neighboring cells are calculated. In the current work, we take advantage of this approach to parallelize the code using the OpenMP standard.
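A minimal 2D version of this cell sweep might look like the following sketch (hypothetical names; the actual SPH interaction is replaced here by simply counting particle pairs closer than \(kh\)):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Particle { double x, y; };  // positions only; a real code carries more fields

// Hypothetical 2D sketch of the linked-list search described above: particles
// are binned into square cells of side kh, and each cell is paired only with
// itself and its E, NE, N and NW neighbours so that no pair is visited twice.
std::size_t count_pairs(const std::vector<Particle>& p, double kh,
                        double xmin, double ymin, int ncx, int ncy) {
    std::vector<std::vector<int>> cells(static_cast<std::size_t>(ncx) * ncy);
    for (int i = 0; i < static_cast<int>(p.size()); ++i) {
        int cx = static_cast<int>(std::floor((p[i].x - xmin) / kh));
        int cy = static_cast<int>(std::floor((p[i].y - ymin) / kh));
        cells[static_cast<std::size_t>(cy) * ncx + cx].push_back(i);
    }
    auto close = [&](int i, int j) {
        double dx = p[i].x - p[j].x, dy = p[i].y - p[j].y;
        return dx * dx + dy * dy <= kh * kh;
    };
    const int ox[4] = {1, 1, 0, -1};  // offsets to the E, NE, N, NW cells
    const int oy[4] = {0, 1, 1,  1};
    std::size_t npairs = 0;
    for (int cy = 0; cy < ncy; ++cy)
        for (int cx = 0; cx < ncx; ++cx) {
            const auto& c = cells[static_cast<std::size_t>(cy) * ncx + cx];
            for (std::size_t a = 0; a < c.size(); ++a)      // pairs inside the cell itself
                for (std::size_t b = a + 1; b < c.size(); ++b)
                    if (close(c[a], c[b])) ++npairs;
            for (int k = 0; k < 4; ++k) {                    // pairs with the four forward cells
                int nx = cx + ox[k], ny = cy + oy[k];
                if (nx < 0 || nx >= ncx || ny >= ncy) continue;
                for (int i : c)
                    for (int j : cells[static_cast<std::size_t>(ny) * ncx + nx])
                        if (close(i, j)) ++npairs;
            }
        }
    return npairs;
}
```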
Due to the Lagrangian nature of the method, special treatments are required for particles along the processor domain boundaries. These particles may require information from neighboring particles located in another processor's domain. This is handled by introducing ghost cells (Gomez-Gesteira et al. 2012) or buffer particles (Marrone et al. 2012). In this paper, the domain decomposition is performed spatially; Fig. 1 shows it for a 2D case for simplicity, but the same applies in 3D. Here, we divide the cells in each thread into inner cells and outer cells: the last cell in each thread is assigned to be the outer cell, and the outer cells are available to both threads. The domain decomposition is performed in this way in order to avoid two or more parallel threads accessing the same data simultaneously. Since each thread updates its own particles, we need to make sure that the other thread has access to the old values instead of the new ones.
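The race-condition argument can be made concrete with a small, purely illustrative OpenMP sketch (hypothetical names and a placeholder update, not the decomposition implemented in this work): each thread reads only old values, including from cell columns it shares with a neighbouring thread, and writes only the new values of its own particles.

```cpp
#include <omp.h>
#include <algorithm>
#include <vector>

// Generic double-buffered update illustrating the idea behind the inner/outer
// cell split: cell columns are partitioned into one contiguous chunk per
// thread; a thread may READ old values from a column owned by a neighbouring
// thread, but it WRITES only the new values of its own particles, so every
// thread always sees the old values of shared columns.
void relax_density(std::vector<double>& rho_old, std::vector<double>& rho_new,
                   const std::vector<std::vector<int>>& column_particles) {
    const int ncols = static_cast<int>(column_particles.size());
    #pragma omp parallel
    {
        const int nthreads = omp_get_num_threads();
        const int tid      = omp_get_thread_num();
        const int chunk    = (ncols + nthreads - 1) / nthreads;
        const int first    = tid * chunk;
        const int last     = std::min(ncols, first + chunk);
        for (int col = first; col < last; ++col) {
            for (int i : column_particles[col]) {
                // Placeholder "interaction": average the old density of this
                // particle with the old densities of the next column, which may
                // lie in the neighbouring thread's chunk. Only rho_old is read
                // there, so no data race occurs.
                double sum = rho_old[i];
                int    cnt = 1;
                if (col + 1 < ncols)
                    for (int j : column_particles[col + 1]) { sum += rho_old[j]; ++cnt; }
                rho_new[i] = sum / cnt;  // each particle belongs to exactly one column/thread
            }
        }
    }
    rho_old.swap(rho_new);  // publish the updated values once all threads are done
}
```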
Dam-break problems are typically used as benchmark test cases for SPH codes. In this paper, a dam break on a tall structure is first simulated to test the performance of the new OpenMP SPH code. Dam-break benchmark studies are important to investigate the influence of severe flooding events, such as tsunamis, on shoreline structures. The experimental setup of Yeh and Petroff, reported by Gomez-Gesteira and Dalrymple (2004), is used to validate the parallel OpenMP SPH code. The dimensions of the tank and of the tall square structure are shown in Fig. 2. A layer of approximately 1 cm of water existed on the bottom of the tank before the dam breaks at \( t=0\) s. In the experiment, as mentioned in Gomez-Gesteira and Dalrymple (2004), the velocity in the x-direction was measured at 2.6 cm from the bottom of the tank and 14.6 cm upstream of the center of the structure.