Matthias WOLF Timo VOGEL Peter WEIERICH Heinrich NIEMANN Christopher NIMSKY
Functional magnetic resonance imaging (fMRI) makes it possible to display the functional activity of specific brain areas. Combined with a three-dimensional anatomical dataset acquired with a standard magnetic resonance (MR) scanner, it can be used to identify eloquent brain areas, resulting in so-called functional neuronavigation, which supports the neurosurgeon in planning and performing the operation. During the operation, however, brain shift leads to increasing inaccuracy of the navigation system. Intraoperative MR imaging is used to update the neuronavigation system with a new anatomical dataset. To preserve the advantages of functional neuronavigation, the functional information must be carried over as well. Since fMRI cannot easily be repeated intraoperatively on the unconscious patient, we address this problem by means of image processing and pattern recognition algorithms. In this paper we present an automatic approach for transferring preoperative markers into an intraoperative 3-D dataset. In the first step, the brain is segmented in both image sets, which are then registered and aligned. Next, corresponding points are determined. These points are then used to determine the positions of the markers by estimating the local influence of brain shift.
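A minimal sketch of the final step described in this abstract, i.e. transferring preoperative functional markers into the intraoperative dataset from point correspondences: segmentation and registration are assumed to have been done already, and the inverse-distance weighting used to estimate the local brain-shift displacement is an illustrative assumption, not necessarily the scheme used in the paper.

```python
import numpy as np

def transfer_markers(markers, pre_points, intra_points, power=2.0, eps=1e-9):
    """Map preoperative markers into the intraoperative volume.

    markers      : (M, 3) preoperative marker coordinates
    pre_points   : (N, 3) corresponding points in the preoperative dataset
    intra_points : (N, 3) the same points located in the intraoperative dataset
    """
    displacements = intra_points - pre_points        # brain-shift vectors at the correspondences
    transferred = np.empty_like(markers, dtype=float)
    for i, m in enumerate(markers):
        d = np.linalg.norm(pre_points - m, axis=1)   # distance from marker to every correspondence
        w = 1.0 / (d + eps) ** power                 # nearby correspondences influence the marker more
        w /= w.sum()
        transferred[i] = m + w @ displacements       # locally weighted displacement estimate
    return transferred

# Toy example: three correspondences, one marker close to the first of them
pre = np.array([[10., 20., 30.], [40., 20., 30.], [25., 50., 30.]])
intra = pre + np.array([[2., 0., 1.], [1., 0., 0.], [0., 1., 0.]])
markers = np.array([[12., 21., 30.]])
print(transfer_markers(markers, pre, intra))
```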
Ingo SCHOLZ Joachim DENZLER Heinrich NIEMANN
The classic light field and the lumigraph are two well-known approaches to image-based rendering, and many new rendering techniques and representations have since been proposed on their basis. Nevertheless, the main limitation remains that almost all of them consider only static scenes. In this contribution we describe a method for calibrating a scene that includes moving or deforming objects from multiple image sequences taken with a hand-held camera. Within each image sequence the scene is assumed to be static, which allows the reconstruction of a conventional static light field. The dynamic light field is thus composed of multiple static light fields, each of which describes the state of the scene at a certain point in time. This allows the modeling not only of rigidly moving objects but of any kind of motion, including deformations. To facilitate the automatic calibration, some assumptions are made about the scene and the input data, for instance that the image sequences of the respective time steps share one common camera pose and that only a minor part of the scene is actually in motion.
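An illustrative data structure for the dynamic light field described above: a time-ordered sequence of static light fields, each built from one image sequence with its calibrated camera poses, where a shared pose links adjacent time steps so that all reconstructions can be placed in a common coordinate frame. The class and field names are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class CameraPose:
    rotation: np.ndarray       # 3x3 rotation matrix (world -> camera)
    translation: np.ndarray    # 3-vector

@dataclass
class StaticLightField:
    images: List[np.ndarray]   # images of the (momentarily static) scene
    poses: List[CameraPose]    # one calibrated pose per image
    shared_pose_index: Optional[int] = None  # pose shared with the next time step, if any

@dataclass
class DynamicLightField:
    time_steps: List[StaticLightField] = field(default_factory=list)

    def add_time_step(self, lf: StaticLightField) -> None:
        # Each appended static light field describes the scene state at one
        # point in time; rendering a dynamic scene then amounts to selecting
        # (or interpolating between) the appropriate time steps.
        self.time_steps.append(lf)
```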