CUDA SPH


Simulating the collision between fluids and solids under gravity.

The use of Graphics Processing Units (GPUs) in computationally intensive applications is increasing due to recent frameworks that allow general-purpose programming on the specialized hardware of the GPU. This project uses a GPU to improve the performance of a Smoothed Particle Hydrodynamics (SPH) program.

SPH is a computational method of simulating the behaviour of particles. Particles can be used to represent objects as a collection of points where physical properties are known. The interaction of particles in a system of objects can be simulated by interpolation using local particle neighbourhoods.
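As a minimal sketch of this interpolation idea, the density at a particle can be estimated by summing the masses of nearby particles weighted by a smoothing kernel. The poly6 kernel of Müller et al. (2003) is used here purely for illustration; it is not necessarily the kernel used in this project's implementation.

```python
import math

def poly6_kernel(r, h):
    """Poly6 smoothing kernel (Mueller et al. 2003): non-zero only within radius h."""
    if r >= h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h**9) * (h * h - r * r) ** 3

def density(i, positions, masses, h):
    """Interpolate density at particle i by summing kernel-weighted neighbour masses."""
    xi, yi, zi = positions[i]
    rho = 0.0
    for (x, y, z), m in zip(positions, masses):
        r = math.sqrt((xi - x) ** 2 + (yi - y) ** 2 + (zi - z) ** 2)
        rho += m * poly6_kernel(r, h)
    return rho
```

Other field quantities (pressure, velocity) are interpolated the same way, with the kernel or its gradient as the weight.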

The Compute Unified Device Architecture (CUDA) was used to develop a parallel implementation of the SPH algorithm developed by Alfonso Gastelum Strozzi. CUDA is the framework that allows general-purpose code to be executed on NVIDIA GPUs.

So far, the physical calculations have been ported to the GPU, and current work involves developing parallel octrees to search particle neighbourhoods efficiently. The speed-up of the physical calculations has been approximately 4x so far; the expected speed-up is at least 10x. Reaching it will be the goal of the optimisation part of this research, in which the algorithms will be refined and hardware support maximised.
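To illustrate why a spatial data structure matters here, the sketch below uses a uniform cell grid, a common, simpler GPU-friendly alternative to the octree approach named above: each particle is hashed into a cube of side h, so a neighbourhood query only scans the 27 surrounding cells instead of all particles.

```python
from collections import defaultdict

def build_grid(positions, h):
    """Hash each particle index into a cubic cell of side h (the kernel radius)."""
    grid = defaultdict(list)
    for idx, (x, y, z) in enumerate(positions):
        grid[(int(x // h), int(y // h), int(z // h))].append(idx)
    return grid

def neighbours(i, positions, grid, h):
    """Particles within h of particle i: scan only the 27 surrounding cells."""
    xi, yi, zi = positions[i]
    cx, cy, cz = int(xi // h), int(yi // h), int(zi // h)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                    if j == i:
                        continue
                    rx = positions[j][0] - xi
                    ry = positions[j][1] - yi
                    rz = positions[j][2] - zi
                    if rx * rx + ry * ry + rz * rz <= h * h:
                        out.append(j)
    return out
```

An octree serves the same purpose with adaptive resolution; the parallel-octree version under development is not shown here.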

Belief Propagation Based Stereo


Tsukuba


Venus

The belief propagation based stereo approach approximates the minimum-energy solution on graphical models of disparities, such as Markov chains or Markov random fields (MRFs). Our approach exploits a symmetric Cyclopean matching model, which accounts for visibility conditions, to construct epipolar profiles that are close to human perception. Unlike traditional asymmetric matching models, this model can simultaneously construct disparity maps with respect to the left, right or Cyclopean reference frame, as well as a Cyclopean image of the 3D scene depicted in a stereo pair.

We focused on both one-dimensional (1D) and two-dimensional (2D) belief propagation. 1D belief propagation has the advantage of fast computation and low memory usage, but suffers from matching errors due to the lack of vertical information. 2D belief propagation is more memory-intensive and slower, but it can achieve high-quality results through 2D message passing, in which matching information propagates across the MRF and decisions are made using all of the image information.
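The 1D case can be sketched concretely: on a chain, exact min-sum belief propagation reduces to one forward and one backward message pass. The linear smoothness cost and all names below are illustrative, not the project's actual energy model.

```python
def bp_1d(cost, lam):
    """Exact min-sum belief propagation along one scanline (a Markov chain).
    cost[x][d]: data cost of assigning disparity d at pixel x.
    lam: penalty per unit of disparity jump between neighbours (linear model).
    Returns the minimum-belief disparity at each pixel."""
    n, D = len(cost), len(cost[0])
    fwd = [[0.0] * D for _ in range(n)]  # message from pixel x-1 into x
    bwd = [[0.0] * D for _ in range(n)]  # message from pixel x+1 into x
    for x in range(1, n):
        src = [cost[x - 1][d] + fwd[x - 1][d] for d in range(D)]
        fwd[x] = [min(src[k] + lam * abs(d - k) for k in range(D)) for d in range(D)]
    for x in range(n - 2, -1, -1):
        src = [cost[x + 1][d] + bwd[x + 1][d] for d in range(D)]
        bwd[x] = [min(src[k] + lam * abs(d - k) for k in range(D)) for d in range(D)]
    # Belief at each pixel = data cost plus both incoming messages.
    return [min(range(D), key=lambda d: cost[x][d] + fwd[x][d] + bwd[x][d])
            for x in range(n)]
```

With a strong smoothness weight, an outlier data cost is overruled by its neighbours; with the weight at zero, the result degenerates to the per-pixel winner-take-all match. The 2D case iterates the same message update over a grid MRF rather than a chain.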

The results of symmetric 2D belief propagation are shown.

Real-time 3D hand tracking for 3D sculpting application

Here is a pair of stereo frames taken by the stereo webcam. The green bounding boxes indicate our tracking subject – the yellow ball.

Although most existing 3D sculpting programs (e.g. Pixologic ZBrush, Autodesk Mudbox) can be controlled by special hand-operated hardware such as a conventional or 3D mouse, or a tablet, it would be more convenient and natural to let users sculpt 3D models simply with the motions and poses of their bare hands. Such a control interface would allow traditional clay sculptors, or people with no sculpting experience, to use these digital programs readily without prior training. An image-based hand tracking system can turn this device-free control interface into reality.

This Master's project aims to develop a real-time stereo system for hand tracking, using a pair of synchronized video cameras, in order to operate a 3D sculpting program. To reduce the project's complexity, we aim to use marker-based tracking, which requires users to wear gloves with special colour patterns to be tracked.


The tracking unit produces 3D coordinates by triangulation and sends them to Blender via a TCP connection to move the yellow cube inside Blender's 3D viewport.
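For a rectified stereo pair, the triangulation step can be sketched as below: the depth follows from the disparity between the two matched pixels, and the remaining coordinates from the pinhole model. The focal length, baseline, and principal point in the test are placeholder values, not the calibrated parameters of the actual cameras.

```python
def triangulate(uL, uR, v, f, B, cx, cy):
    """Rectified-stereo triangulation: recover (X, Y, Z) in the left-camera
    frame from a matched pixel pair on the same scanline.
    f: focal length in pixels, B: baseline in metres, (cx, cy): principal point."""
    d = uL - uR                      # disparity; must be > 0 for a point in front
    if d <= 0:
        raise ValueError("non-positive disparity")
    Z = f * B / d                    # depth is inversely proportional to disparity
    X = (uL - cx) * Z / f
    Y = (v - cy) * Z / f
    return X, Y, Z
```

In the real system the images must first be rectified with the calibration data so that matches lie on the same scanline.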

At this stage, we have created a simple working prototype of the system. The prototype is connected to the Minoru stereo webcam for live video input. It can track more than one object at the same time using the Continuous Adaptive Mean Shift (CAMShift) algorithm; the current CAMShift implementation adapts source code from the OpenCV library.
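The core idea behind CAMShift can be sketched with the plain mean-shift step it is built on: repeatedly move the search window centre to the centroid of the target pixels it contains. This is a simplification for illustration; CAMShift additionally adapts the window size and orientation from colour-histogram back-projection, which the OpenCV implementation handles and which is omitted here.

```python
def mean_shift(points, cx, cy, radius, iters=20):
    """Simplified mean-shift core of CAMShift: shift a circular search window
    to the centroid of the target points inside it until it stops moving.
    points: (x, y) pixel coordinates belonging to the tracked colour blob."""
    for _ in range(iters):
        inside = [(x, y) for x, y in points
                  if (x - cx) ** 2 + (y - cy) ** 2 <= radius * radius]
        if not inside:
            break                              # lost the target entirely
        nx = sum(x for x, _ in inside) / len(inside)
        ny = sum(y for _, y in inside) / len(inside)
        if abs(nx - cx) < 1e-6 and abs(ny - cy) < 1e-6:
            break                              # converged
        cx, cy = nx, ny
    return cx, cy
```

Running one such tracker per colour pattern is what allows several objects to be followed at once.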

3D coordinates generated by the tracking unit are sent continuously via a TCP connection to Blender. Blender was chosen for the 3D sculpting interaction because it is the only open-source software package with built-in sculpting support, and its source code is freely available for further customization. By means of a customized Blender Python script, the tracked coordinates are then used to control the movements of some basic 3D cubes within Blender in real time. Existing programming assets from the Intelligent Vision Systems (IVS) research group of the Department of Computer Science, such as camera calibration and real-time video streaming from the cameras, were used to complete the current prototype.
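The coordinate streaming could be sketched as follows: the tracker serializes each triple as a newline-terminated text line over the TCP socket, and the Blender-side script parses lines back into floats. The "x y z\n" wire format here is an assumption for illustration, not the protocol actually used by the prototype.

```python
import socket

def send_coords(sock, x, y, z):
    """Serialize one tracked 3D coordinate as a newline-terminated text line.
    (The "x y z\\n" format is an assumed example, not the prototype's protocol.)"""
    sock.sendall(f"{x:.4f} {y:.4f} {z:.4f}\n".encode("ascii"))

def recv_coords(line):
    """Parse one received line back into a coordinate triple (Blender-side helper)."""
    x, y, z = map(float, line.split())
    return x, y, z
```

On the Blender side, such a helper would run inside the customized Python script, updating the cube's location each time a line arrives.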