Abstracts of the EPSM-ABEC 2008 conference. Australasian Physical & Engineering Sciences in Medicine, 2008. DOI: 10.1007/bf03178607

Discussion: For the simple inhomogeneity block phantom, the convCT profile is close to that expected. Both CBCT profiles show good agreement for the lung tissue, but very poor agreement for the bone, with normal tissue values significantly different from the expected zero. The full fan scan gives better results than the half fan scan. Further investigation will consider the effect of scatter on the calibration method for half fan. Point dose calculations with the Eclipse AAA algorithm show significant differences from measured doses for inhomogeneities. The extreme cases in this simple phantom will be extended with further work on an anthropomorphic phantom such as the "Elvis" pelvis phantom. Conclusion: CBCT data provides much information for dose assessment. Further work on calibration and comparison with measured doses will determine the dosimetric tolerance possible. This is part of a larger project looking at the role of adaptive planning. This work was supported by an NSW Cancer Institute Clinical Research Grant.

If this disadvantage can be mitigated, the plan may benefit from the better effective depth for beams influenced by high-density artificial tissues. Another significant point is that using CBCT-scanned images for direct planning could improve 3D registration quality during IGRT treatment, giving more uniform positioning and more reliable patient set-up, since the same image acquisition technique and the same set-up are used at the same time. Conclusion: MV-CBCT can provide reliable electron density calibration, and the image quality is potentially good enough to be used to perform a direct 3D planning process.

Introduction: The topic deals with phenomena like arrhythmia (bradycardia, tachycardia) and sudden cardiac death, especially with atrial flutter. The causes of these phenomena are supposed to lie in the ionic and electrochemical processes of cell metabolism, in the synchronisation between neighbouring cells, and in the electromechanical coupling to neighbouring cells. Methods: A time-dependent membrane potential is generated by means of the relation between intra- and extracellular ion concentrations. Normally this membrane potential Φ_m(c) is periodic or quasi-periodic. Furthermore, the occurrence of "atypical" auto-oscillators in the working myocardium is supposed to be a consequence of fluctuations of the ionic concentrations c_k. Every cell of the heart is excited periodically, phase-shifted relative to its neighbours according to the "all or nothing" law, and returns to its resting state under the influence of electrogenic pumps. This can be summarised as a nonlinear differential equation system of order 3.

Introduction: Over the years, the role of the right ventricle (RV) of the heart in maintaining normal overall haemodynamics and interventricular dependence has become much more recognized. Measurements of the shape and the regional deformation of not only the left ventricle (LV), but also the RV, are essential for quantifying normal and impaired cardiac function. Currently, magnetic resonance imaging (MRI) is considered to be the most accurate method for assessing the size and function of the RV due to its excellent image quality and reproducibility, especially in right ventricular dysplasia.
Cardiac Image Modeller (CIM), a software package developed by the Auckland MRI Research Group (AMRG), enables rapid visualization and analysis of MR images in 3D space and through time. Through interactive fitting of a deformable model of the LV to MR images, mass, volume and ejection fraction of the LV at end-diastole and end-systole can be obtained (see Fig. 1(a) & (b)). It is the aim of this study to extend the CIM interactive modelling techniques developed by AMRG to fast and efficient clinical analysis of both LV and RV deformation. A geometric finite element model of both ventricles of an isolated arrested pig heart developed previously [1] consists of 88 elements, each with tricubic interpolation in circumferential, longitudinal and transmural coordinates. The large number of degrees of freedom (DOFs) in this model may compromise the speed of clinical analysis. Thus, in order to reduce the number of DOFs, the model was set to be bicubic in circumferential and longitudinal coordinates, and linear in the transmural coordinate.

(Table 1: percentage of image pixels within 3%, 3 mm for ten clinical IMRT fields.) A comprehensive, physics-based fluence model to predict portal dose images for both static and IMRT fields has been designed. The dose calculation accounts for the EPID energy response and possesses a detailed two-source model for fluence prediction. This model demonstrates the accuracy needed for IMRT portal dose image prediction in complex clinical examples (< 3%, 3 mm) and can be used for pre-treatment verification. The fluence model incorporated several significant improvements compared to a TPS fluence model. The fluence model improvements described here could potentially be incorporated into patient-specific dose calculation algorithms. Future work will implement this model with a previously developed algorithm that predicts patient scatter energy fluence in the EPID.

Introduction: Backscattered radiation from the support arm structures of the Varian amorphous silicon EPID (Varian, Palo Alto, CA) has been found to affect dose by up to 5% for large field sizes, particularly at the superior end of the detector where the support arm components lie. The backscattered component (BSC) is largest in fields that irradiate the maximum amount of the support arm, e.g. the flood-field (FF). The FF calibration image therefore only compensates for BSC for an acquired image of the same field size. For smaller fields the BSC in the FF correction image will be larger than in the acquired image, introducing dosimetric artifacts. In this work we characterize the BSC with field size, and investigate whether removing the BSC of the FF will improve the accuracy of EPID dosimetry for smaller fields. Methods: Images of square fields of side length 2.5 to 27.5 cm in 2.5 cm increments, as well as a 40 × 30 cm² full detector image, were acquired with the EPID centred on the support arm as normally employed. The irradiations were made using flood-field acquisition mode, where only dark-field corrections are applied by the software, for both 6 and 18 MV photons. The EPID was then removed from the support arm (E-Arm and R-Arm types) and the irradiations repeated. The ratio of the images was calculated to yield the BSC for each field size. Results: Figure 1(a) shows the BSC for the FF (40 × 30 cm²) irradiation (E-Arm type). The maximal backscatter occurs directly above the central support arm section, with an increase of 6% in signal.
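The core of this measurement is a pixel-wise image ratio. Below is a minimal sketch of that calculation in Python, using synthetic arrays in place of the exported EPID images; the array shape, values and the 6% bump are illustrative only, not the study's data.

```python
import numpy as np

def backscatter_component(img_on_arm: np.ndarray, img_off_arm: np.ndarray) -> np.ndarray:
    """Ratio of EPID images acquired with and without the support arm.

    Values above 1.0 indicate extra signal from arm backscatter (BSC).
    Both inputs are dark-field-corrected integrated images of the same field.
    """
    return img_on_arm / img_off_arm

# Illustrative use with synthetic data: a 6% backscatter excess over the
# support-arm half of the panel, as reported for the flood-field irradiation.
rng = np.random.default_rng(0)
base = rng.normal(1000.0, 5.0, size=(384, 512))   # hypothetical off-arm image
with_arm = base.copy()
with_arm[:192, :] *= 1.06                          # in-plane gradient toward the arm
bsc = backscatter_component(with_arm, base)
print(f"max BSC excess: {100 * (bsc.max() - 1):.1f}%")
```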
This introduces a large gradient in the in-plane direction to the FF, and a smaller artifact in the cross-plane direction, which are then applied to all FF-corrected acquired images. Figure 2 shows the influence of backscatter from the support arm on the central axis signal. If the FF is not applied, a gradual increase in the BSC with field size is observed. However, if the images are corrected with the normal FF, the opposite effect can be seen. Discussion: The FF correction image introduces signal and gradient artifacts due to backscatter from the support arm. When the EPID is used in the standard configuration, the increased signal from the FF adversely affects images of smaller fields. Figure 2 shows that for EPIDs which use the E-Arm type support structure, it is more accurate not to use the FF calibration for square field sizes less than 10 cm, whereas larger field sizes benefit from incorporating the FF. This increases to a square field size of approximately 18 cm for the R-Arm type support structure. Conclusions: Removing the BSC of the FF image will improve the accuracy of EPID dosimetry for some clinically relevant field sizes. This provides a simple solution to improve the accuracy of EPID dosimetry with the Varian device and will improve the accuracy of IMRT verification.

M. Madebo*, T. Kron, A. Perkins, C. Fox and P. Johnston, Peter MacCallum Cancer Centre, East Melbourne, Australia. Introduction: Electronic portal imaging devices are considered an essential accessory for linear accelerators by virtually all radiotherapy departments. This provides scope to use them as 'in-built' quality assurance devices which, by their very nature, are readily available, real-time and in a well-defined geometric relation to the linac beam. This has particularly been of interest as film processors are phased out of hospitals in favour of digital imaging equipment. Owing to this, we investigated the viability of a-Si EPIDs as a physics tool for routine QA in determining the junction dose of X-ray fields at different gantry angles, collimator angles and source-to-detector distances (SDD), something which would be difficult to do using other dosimetric techniques. Methods: Measurements were performed using an a-Si500 EPID mounted on a 2100C/D and a 21iX linac (Varian, Palo Alto, CA) with a nominal photon energy of 6 MV. Measurements were performed at different collimator and gantry angles and varying SDDs. No additional build-up material was used. High quality acquisition mode (10 frames for 6 MV) was selected. The nominal dose rate was 600 MU/min. Asymmetric jaw settings were used, with one jaw set to 0 and the other to 10 cm; the other pair of jaws was set symmetrically to a 20 cm opening. A large number of individual images were acquired and exported in DICOM format to ImageJ software and in-house software for further evaluation. Results: The following parameters were verified by adding two images to form a junction: small collimator rotations were introduced to mimic misalignment of jaws, and misalignments as low as 0.1 degrees were observed; the dependence of the junction dose on gantry angles of 0, 90, 180 and 270 degrees was confirmed; and slight variation of the junction dose was also observed for different SDDs. Discussion: For a gantry angle of 0 degrees, comparison of the junction dose using EPID and film proves that the EPID can be used for routine physics QA purposes.
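Forming the junction is conceptually just a pixel-wise sum of the two abutting half-beam images, with the junction dose read from the profile across the match line. A minimal sketch, using synthetic half-beam images rather than real DICOM exports (the array size and the one-pixel overlap are illustrative):

```python
import numpy as np

def junction_profile(img_a: np.ndarray, img_b: np.ndarray, axis: int = 0) -> np.ndarray:
    """Sum two abutting half-beam EPID images and return the mean profile
    across the junction. A matched junction gives a flat profile; a gap or
    overlap appears as a dip or spike at the abutment line."""
    summed = img_a + img_b
    return summed.mean(axis=1 - axis)

# Illustrative synthetic half-beam images (hypothetical pixel grid):
n = 200
img_a = np.zeros((n, n)); img_a[: n // 2 + 1, :] = 1.0   # deliberate 1-pixel overlap
img_b = np.zeros((n, n)); img_b[n // 2 :, :] = 1.0
profile = junction_profile(img_a, img_b)
print(f"junction peak: {profile.max():.1f} x open-field signal")   # ~2.0 for this overlap
```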
However, EPIDs can also be used to check collimator misalignment as well as junction dose at any gantry angle of practical importance, which is difficult with films. Our investigation shows that jaw misalignment of even 0.1 degrees can be detected. Furthermore, the effect of gantry sag has been observed at gantry angles of 90 and 270 degrees, where the junction dose was found to be consistently higher. This is because the jaw closer to the floor sags more than the other jaw and, as a result, the opening between the jaws increases to give a higher junction dose. Finally, the study shows that the reproducibility of jaw position can easily be assessed. Conclusions: The use of the EPID for verification of X-ray junction dose at various gantry angles and jaw misalignment proved to be flexible and effective. A large number of single images can be acquired quickly and, as a result, a large number of parameters can be determined by combining appropriate images. Future studies will extend the work to spoke shots and the assessment of multileaf collimator settings.

quite often not until after it has finished. In addition, any discovered error in the target position using this method can be attributed with equal validity to either an actual setup error or simply to movement of the film between exposures. Following the installation of a Siemens Oncor Impression linear accelerator fitted with an electronic portal-imaging device (EPID), a new method for determining the location of the target in BRW space has been developed. This involves acquiring pre-treatment AP and lateral double-exposed electronic portal images and using them with a custom-developed software application to calculate the target position. Methods: Testing of the software application was performed by inputting coordinates from pre-acquired portal films that had been previously analysed using the SCS1 and comparing the resulting target coordinates. To test system accuracy, portal images using both film and EPID were acquired for multiple target positions using the BRW reference frame and a custom phantom, and analysed using the SCS1 or the software application. Timing was determined by taking the average time of completion from acquisition of the first image to calculation of the final result. The software application reproduced the SCS1-calculated target position to within 0.01 ± 0.005 mm. The effectiveness of the new EPID method was compared against the film and digitiser method using both the BRW system and a custom-made phantom, and was found to be faster (approximately 6 minutes versus 17 minutes, respectively) and of equivalent accuracy in determining the target position (mean RMS of 0.2 ± 0.1 mm vs. 0.3 ± 0.1 mm). The shortening of the time required to complete the QA using the new method allows a transition from a post- to a pre-treatment check, and also eliminates ambiguity in the potential source of error of the target position. Conclusion: Moving from single-fraction SRS QA performed with film/digitiser to EPID images and a software application is highly recommended due to advantages in workflow and patient safety, without compromising accuracy.

Roy Eagleson and Terry Peters. Introduction: Surgical technologies are advancing rapidly, and an exciting front is being led by developments in the visualization of biomedical images and through innovations in robotics. In addition to providing support for the diagnosis of many diseases, specific imaging modalities can be utilized for surgical training, planning, and virtual reality for guiding interventions.
In general, multiple imaging modalities can be combined in an augmented display, through superimposed volumetric displays, multiple screens, interactive exploration, and haptic feedback. We have been developing specific systems for the display of pre-operative CT and MRI along with intra-operative ultrasound, integrated with surgical tool tracking using magnetic position sensors. Methods: In the design of human-computer interfaces, it is important to consider the capacities and limitations of both the computer system and the human operator. Our research interests span the domains of Software Engineering and Experimental Psychology. Results from both areas are needed in order to develop immersive 'virtual' environments. We regard the design of HCI systems as an enterprise activity: it is a user-centred process which is built on multiple iterations of a design-implement-test-refine loop. This design cycle includes evaluative research ('human factors'), and needs to be informed by basic scientific research in Perception, Cognition, and Motor Control. In order to unify these approaches for applications in HCI, we must consider formal methods for linking the Science and Engineering disciplines. Carroll and Rosson (1996) have articulated a method for conducting parallel activities in Software Engineering and Cognitive Science. They have proposed a method for representing the formal artifacts of a design process using object-oriented software analysis methods, along with descriptions which capture, in their words, "causal schemas": the former are descriptions of how software objects interact to form working systems, and the latter are claims about the user-centred phenomena associated with these design artifacts (in the user interface), which have certain properties that can be linked with specific psychological and perceptual phenomena. Carroll and Rosson describe the resulting process as an 'action science', but we think of it as nothing more than a prescription for how to drive research along on both fronts in a way which facilitates linkages between the two. We have embraced this innovative approach as a candidate method for designing applications involving tele-robotic systems, for biomedical visualization, and for surgical training. Results: When several imaging modalities are combined from disparate displays onto one screen using an Augmented Reality presentation, the resulting scene dynamics and translucency effects open up a number of empirical questions. Recent theories of perceptual integration (more specifically, the formation of subjective contours in stereoscopic presentation, with structure-from-motion, texture gradients, and occlusion boundaries) inform us that the design of our displays needs careful consideration of the capacities and constraints of the human visual and cognitive faculties. Furthermore, in order to assess the usability of these enhanced systems objectively, we must make use of methodologies from human-computer interface design, perceptual testing, and visual-motor paradigms from experimental psychology. Discussion: Our current user-interface and psychology research project involves examining individual differences in strategy choice on spatial tasks, aptitude for spatial reasoning, and how these differences can best be supported when designing computer interfaces for surgical training.
We are developing user interface software according to the principle that users may prefer different interaction modes and viewpoints depending on which subtask is being performed within an overarching surgical procedure. We are using workflow analysis to rank these subtasks and categorize them according to factors of Diagnosis, Planning, Wayfinding, Navigation, Manipulation, and Retrieval. These imply different usage, and different display modalities, when performing a task using a 'virtualized' interface. We are investigating specific individual differences in spatial reasoning and using the results to design user interfaces which are adaptable to the individual.

Introduction: Movements in people with Parkinson's disease are often hypometric, although we have shown that this was not the case in an experimental visually-guided reaching task. However, these movements may still be intrinsically hypometric, with visual feedback used to reach the target accurately in a single, albeit complex, movement. This hypometria may be more pronounced in memory-guided movements, which involve the basal ganglia to a greater degree. Decomposing movements into individual components should reveal the strategy that is used. A fast movement to a fixed target can be thought of as being composed of one or more submovements, each of which is preprogrammed and generated by an inverse internal model. The first of these is called the initial, or primary, submovement. We wished to explore our hypotheses that (1) people with Parkinson's disease produce hypometric primary submovements but (2) are able to use visual feedback to accurately reach the target in a single overall movement, and (3) this effect may be greater in memory-guided tasks, in which an internal representation of the target location is used instead of a fixation-centered representation of the target. Methods: A Movement and Virtual Environment (MoVE) system, comprising a calibrated, low-latency, near-field 3D virtual environment with an electromagnetic tracker, was used to run the experiment and record the data. Visually- and memory-guided reaching movements were examined in 22 people with mild to moderate severity Parkinson's disease on medication, along with age-matched and sex-matched controls. Primary submovements were extracted from 5149 movements using a method based upon zero crossings of jerk (the third derivative of position), with several additional criteria to minimize the detection of submovements due to noise or tremor (a minimal sketch of this decomposition step is given below). The gains of the primary and final movements were then calculated. A linear mixed-effects model was used for the multiple dependent variables, with fixed effects of Group and Task, and a random effect of subject. Discussion: The final gain in both tasks was not different between groups, although there was a task effect. On the memory-guided task, overall gains were smaller, related to spatial memory underestimating the distance to the target. In comparison to the final gain, the gain of the primary submovement was found to be substantially smaller in the Parkinson's disease group compared to controls in both tasks. Also, while the gain of the primary submovement was equal in both tasks for the control group (even though the final gain of the memory-guided movement was smaller), the gain was smaller in the memory-guided task compared to the visually-guided task in the Parkinson's disease group (even though the final gain in the memory-guided task was not different to that of controls).
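As referenced in the Methods above, here is a minimal sketch of a jerk zero-crossing decomposition for a single 1-D reach. It omits the study's filtering, onset detection and noise/tremor criteria, and the signal is synthetic; it illustrates the idea, not the exact algorithm used.

```python
import numpy as np

def primary_submovement_gain(pos: np.ndarray, t: np.ndarray, target_dist: float) -> float:
    """Estimate the gain of the primary submovement from a 1-D position trace.

    The primary submovement is taken to end at the first zero crossing of
    jerk after peak velocity; gain is the displacement at that point divided
    by the target distance."""
    vel = np.gradient(pos, t)
    acc = np.gradient(vel, t)
    jerk = np.gradient(acc, t)
    i_peak = int(np.argmax(np.abs(vel)))
    crossings = np.where(np.diff(np.sign(jerk[i_peak:])) != 0)[0]
    i_end = i_peak + (int(crossings[0]) + 1 if crossings.size else len(t) - 1 - i_peak)
    return (pos[i_end] - pos[0]) / target_dist

# Illustrative minimum-jerk-like reach of 0.28 m toward a 0.30 m target:
t = np.linspace(0.0, 1.0, 500)
s = 10 * t**3 - 15 * t**4 + 6 * t**5          # smooth unit displacement profile
pos = 0.28 * s
print(f"primary submovement gain: {primary_submovement_gain(pos, t, 0.30):.2f}")
```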
This highlights the extra difficulty people with Parkinson's disease have in making a movement of the correct size, especially when the target is represented in spatial memory or is internally generated. In older people and in pathological conditions there are several other contenders for submovements besides a primary submovement and a corrective submovement. Firstly, if the multiple limb segments are not coordinated perfectly, this can lead to irregularities in the movement traces which are not directly attributable to a separately generated submovement. Additionally, multiple forms of tremor (e.g., rest, action, and essential) all add irregularities to the movement trace and are a form of overlaid unintentional submovements. However, the use of a criterion that a detection has to be significant in comparison to jitter at rest, which in many cases causes the algorithm to treat the peak velocity of the movement as the peak velocity of the submovement, means that in many cases the algorithm is likely to over-estimate the size of the primary submovement in subjects with tremor. Conclusions: Our results show that the underlying primary submovement in visually-guided movements in people with Parkinson's disease is hypometric and that the degree of hypometria is even greater in memory-guided movements. More sophisticated submovement decomposition methods, in combination with additional sensors attached to all upper-limb segments in the analysis, may be able to tease out the contributions of the different joints and the components of the movement that are due to a form of tremor. This will offer greater insight into how the production and size of submovements change in people with Parkinson's disease and may also offer an objective measure to quantify motor improvements due to treatment.

Augmented Reality (AR) allows medical data to be overlaid on the real world, providing new methods for medical visualization. However, there is still significant research to be done on how to interact with AR content. In this presentation we review a variety of AR interaction techniques that may be useful for the medical domain and describe important research directions for the future. Some of these techniques include lens-based interaction, mobile AR displays, and transitional interfaces, among others. Research will be presented from the HIT Lab NZ and other AR research labs worldwide.

Introduction: Hyperglycaemia is prevalent in critical care due to the stress of the condition, even without a previous history of diabetes. Tight glycaemic control is associated with significantly improved patient outcomes. However, providing tight control is difficult due to evolving patient condition and interactions with common drug therapies, resulting in recurring hyperglycaemic episodes. Model-based and model-derived tight control methods, such as the SPRINT system in Christchurch, have shown significant reductions in mortality. However, as computational capability and access improve, there are still avenues of further improvement to be made, if better models and/or methods were available. This research presents an updated control model, and its predictive virtual-patient validation, for use in real-time glycaemic control. The model is based on prior work in this area by the authors' research group and defines a simple pharmacokinetic and pharmacodynamic system model. In the model, G is the blood glucose level, I is the plasma insulin, and Q is the interstitial insulin.
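The equations themselves did not survive in this abstract. As an orientation for the reader, a representative form consistent with the variable definitions given here and with this group's published glucose-insulin system models is sketched below; it is an assumption, not necessarily the exact updated model of the paper:

$$
\begin{aligned}
\dot{G} &= -p_G\,G \;-\; S_I\,\frac{G\,Q}{1+\alpha_G Q} \;+\; \frac{P(t) + EGP(G,Q) - CNS}{V_G},\\
\dot{Q} &= k\,(I - Q),\\
\dot{I} &= -\frac{n\,I}{1+\alpha_I I} \;+\; \frac{u_{ex}(t)}{V_I},
\end{aligned}
$$

where P(t) is the exogenous glucose input, u_ex(t) the exogenous insulin input, V_G and V_I the glucose and insulin distribution volumes, and EGP(G,Q) falls from EGP_max as G and Q rise, i.e. the suppression behaviour described in the sentences that follow.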
EGP_max is the theoretical maximum endogenous glucose production for a patient in the absence of glucose and insulin. Endogenous glucose production (EGP) is suppressed with increasing G and Q. Insulin-independent glucose removal (excluding central nervous system uptake, CNS) and the suppression of EGP from EGP_max with respect to G are represented by p_G. In contrast, insulin-dependent glucose removal is characterised by the insulin sensitivity parameter.

Introduction: Insulin resistance (IR), or low insulin sensitivity, is a major risk factor in type 2 diabetes and cardiovascular disease. Current tests are either very labour intensive (euglycaemic clamp, IVGTT) or have too low a resolution (HOMA, fasting glucose/insulin). A simple, high-resolution assessment of IR would enable earlier diagnosis and more accurate monitoring of intervention effects. This research is based on a previously validated model-based method for measuring insulin sensitivity (S_I) which is highly correlated to the gold-standard clamp. The prior work utilises both insulin and glucose measurements approximately every 5 minutes, as well as C-peptide measurements. The protocol is of short duration (<1 h), simple, low cost and highly repeatable. This paper improves the accuracy of model-parameter identification for the insulin kinetics with little to no added computational cost. It also significantly reduces the number of insulin measurements required to identify S_I, and shows that the removal of C-peptide measurements has little effect on the accuracy of S_I. Methods: Clinical data are used from a prior study which included 46 short insulin sensitivity IVGTT tests, with ethics approval granted by the Upper South A Regional Ethics Committee. A full data set of glucose, insulin and C-peptide measurements is first assumed, and a new extended iterative integral-based parameter identification is implemented to identify the insulin kinetics. Once the insulin kinetics are obtained, insulin sensitivity is obtained using a previously derived one-iteration integral method. The new iterative integral method uses an approximate analytical solution to the nonlinear saturated kinetic equations and requires very minimal computation. The method first assumes there is no saturation, to obtain an approximate insulin curve. This approximate insulin curve is then substituted into the saturated part of the insulin kinetics differential equations, to enable an approximate solution to the full saturation equations. For the insulin kinetics analysis, the accuracy of the method is compared to the previous one-iteration integral method. An analysis is then done to find the minimal data set required for accurate insulin sensitivity measurement. This analysis includes removing C-peptide measurements, followed by insulin measurements. All glucose measurements are kept. The baseline for comparison is the S_I value from the full data set. Results: The new iterative integral method is found to converge very rapidly with very little additional computational time required. The percentage model insulin response errors are typically 3-5%, compared to 7-15% for the one-iteration integral method. A similar iterative integral method applied to the glucose response was not used, as simulation showed it only gave a small reduction in error (<2%). The removal of C-peptide measurements had only a very small effect on S_I (<1%).
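A minimal numerical sketch of this substitution idea, assuming a generic saturated clearance model dI/dt = -n·I/(1 + α·I) + u(t)/V. The study uses an approximate analytical solution rather than the Euler integration used here, and all parameter values are illustrative.

```python
import numpy as np

def iterate_insulin(t, u, n, alpha, V, n_iter=3):
    """Approximate I(t) for dI/dt = -n*I/(1 + alpha*I) + u(t)/V.

    Iteration 0 solves the unsaturated problem (previous iterate = 0);
    each further pass substitutes the previous curve into the saturation
    denominator and re-integrates, mirroring the substitution idea above."""
    dt = t[1] - t[0]

    def integrate(I_prev):
        I_new = np.zeros_like(t)
        for k in range(1, len(t)):
            denom = 1.0 + alpha * I_prev[k - 1]
            I_new[k] = I_new[k - 1] + dt * (-n * I_new[k - 1] / denom + u[k - 1] / V)
        return I_new

    I = integrate(np.zeros_like(t))       # no-saturation first pass
    for _ in range(n_iter):               # substitute and re-solve
        I = integrate(I)
    return I

# Illustrative bolus response (all parameter values hypothetical):
t = np.arange(0.0, 60.0, 0.5)             # minutes
u = np.where(t < 1.0, 2000.0, 0.0)        # short insulin bolus, mU/min
I = iterate_insulin(t, u, n=0.16, alpha=0.0017, V=12.0)
print(f"peak plasma insulin ~ {I.max():.0f} mU/L")
```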
The use of one basal insulin measurement I_basal, one insulin measurement 10-15 minutes after the bolus I_bolus, and a near steady state

Introduction: Hyperglycemia (high blood sugar) occurs in 40-80% of very low birth weight infants in the neonatal intensive care unit (NICU). This condition has been linked to mortality and morbidities including retinopathy of prematurity, osmotic diuresis, reduced immune system performance and sepsis. Insulin therapy can control hyperglycaemia and promote growth, but increases the risk of dangerously low blood glucose levels. The goal of this model is to provide a vehicle for real-time blood glucose control by accurately capturing the dynamic effect of insulin, to provide dosing recommendations for attending clinicians. The model is used for two tasks: predicting future blood glucose concentrations for real-time control, and performing simulated trials to optimise control strategies. Methods: The glucose regulatory system model is based upon a similar model employed successfully in adult intensive care, modified to account for the differing physiology of the neonate. Insulin sensitivity is the driving parameter in the model. Stochastic modelling and time-series analysis methods provide confidence bands for blood glucose predictions. Retrospective data are used to generate patient-specific, time-varying insulin sensitivity profiles via integral-based parameter identification methods. The profiles are used to generate "virtual patients", which are used to simulate patient responses to glucose and insulin inputs. Results: Retrospective data for 25 episodes of insulin usage, representing over 3,500 hours of patient data, were used to validate the model in simulation. Median absolute prediction errors for the 25 virtual patients at 1 and 2 hour intervals were 5.8% and 9.9% respectively. With stochastic modelling of the insulin sensitivity parameter, on average 59% of blood glucose predictions fell within the 0.98 mmol/L wide 25-75% confidence range, and 91% of measurements fell within the 2.79 mmol/L wide 5-95% confidence range (a sketch of this coverage statistic is given below). Simulations using basic controllers over the 25 "virtual patients" resulted in 87% of hourly simulated measurements within the target 4-7 mmol/L band, compared to 31% for retrospective hospital control. Mean blood glucose decreased 32% from 8.4 mmol/L to 5.7 mmol/L, and the standard deviation decreased 44% from 3.2 mmol/L to 1.8 mmol/L. Increased time in a target band and reduced standard deviation are robust measures of the tight control possible with model-based methods. Incorporating the stochastic model into simulated controllers resulted in a 17% decrease in simulated measurements below 4 mmol/L. Discussion: Insulin sensitivity in the neonate was found to be higher than in the similar adult model, due to higher metabolic clearance of insulin and higher rates of glucose turnover. Endogenous glucose production may not be suppressed by exogenous glucose infusions in the neonate, exacerbating hyperglycaemia. Additionally, limited glycogen stores and low concentrations of enzymes for gluconeogenesis underline the importance of optimising glucose uptake. Blood glucose control presents several challenges that are unique to this group of infants. Limited blood volume places restrictions on the frequency of blood glucose sampling achievable in practice. Virtual simulations allow testing of measurement frequency regimes in silico and optimization during real-time control. Incorporating stochastic models provides extra assurance against hypoglycaemia.
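A minimal sketch of the band-coverage statistic quoted in the Results, assuming per-prediction lower/upper band arrays; the stochastic model that generates the bands is not reproduced, and the data here are synthetic.

```python
import numpy as np

def band_coverage(measured: np.ndarray, lower: np.ndarray, upper: np.ndarray) -> float:
    """Fraction of measurements falling inside per-prediction confidence bands,
    e.g. the 25-75% or 5-95% percentile bands from a stochastic model of
    insulin sensitivity propagated through the glucose model."""
    inside = (measured >= lower) & (measured <= upper)
    return float(inside.mean())

# Illustrative check against hypothetical hourly data:
rng = np.random.default_rng(1)
pred = 6.0 + rng.normal(0.0, 1.0, size=3500)        # model predictions (mmol/L)
meas = pred + rng.normal(0.0, 0.7, size=pred.size)  # paired measurements
half_band = 0.49                                     # 0.98 mmol/L wide 25-75% band
cov = band_coverage(meas, pred - half_band, pred + half_band)
print(f"25-75% band coverage: {100 * cov:.0f}%")
```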
Conclusions: Hyperglycaemia affects a large proportion of premature infants, and has been linked to worsened outcomes. A model that accurately captures the dynamics of neonatal metabolism can provide a vehicle for real-time blood glucose control and a platform to develop high-performance control algorithms in simulation. Reduced hyperglycaemia and tighter control in simulated results highlight the possibility of tight glucose control for improved outcomes in this fragile neonatal cohort. Clinical trials of this model are in progress.

Justin Boyle 1, Melanie Jessup 2, Julia Crilly 3, James Lind 3, Marianne Wallis 2,3, Peter Miller 4 and Gerry Fitzgerald 5. 1 Australian E-Health Research Centre, CSIRO ICT Centre; 2 Griffith University; 3 Gold Coast Hospital, Queensland Health; 4 Toowoomba Hospital, Queensland Health; 5 Queensland University of Technology. Introduction: We have forecasted patient admissions to a public hospital Emergency Department (ED) with the expectation that this knowledge can improve bed management and elective surgery practices. Methods: Five years of patient admission data (Jul '02 - Jun '07) were obtained from a Queensland public hospital ED (average daily presentations 150, average daily admissions 50) and aggregated into monthly, daily and hourly intervals. The dataset was partitioned into an estimation set used to fit the models and a 12-month hold-out set used for validation. Daily admission forecasts made using four years of estimation data were compared to those made using the most recent two years of estimation data. Regression models were developed using calendar variables (date, month-of-year and day-of-week) and a variable to indicate public holidays and the surrounding days. Exponential smoothing and Autoregressive Integrated Moving Average (ARIMA) methods were also investigated. Forecast accuracy was assessed by computing the Mean Absolute Percentage Error (MAPE) and the number of predictions that fell outside the 95% prediction interval. Results: Forecast accuracy of several techniques is shown in Table 1 and Figure 1; for example, simple seasonal exponential smoothing estimated on two years of data gave a MAPE of 11.0% with 3.6% of predictions (13 instances) outside the 95% prediction interval, and ARIMA (0,0,0)(1,0,1) estimated on two years gave a MAPE of 11.1%, also with 3.6% (13 instances) outside. Analysis of the data prior to forecasting indicates that although presentations to the ED have increased over the analysis period, bed admissions have shown a plateau effect, possibly due to bed capacity being reached. It is for this reason that forecasts made using a recent two-year period are more accurate than those based over four years (Figure 1). Forecasting performance was comparable among the different methods, with the lowest error arising from regression based on two years of preceding data. The daily admission data described in this study are noisy and more difficult to predict than monthly data. If admissions are aggregated to a monthly level, we are able to achieve MAPE figures of 1.8%, and similarly low forecasting errors have been reported by others modelling monthly data [1]. However, predictions at a daily resolution are expected to be more useful for bed managers. A recent study based on daily data [2] reports similar MAPE figures but does not include prediction intervals, which are considered essential components of forecasting. We are continuing efforts to reduce the errors and the width of prediction intervals using additional modelling techniques.
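A minimal sketch of this style of evaluation, holding out 12 months and scoring MAPE and 95%-interval exceedances with statsmodels. The seasonal order mirrors the ARIMA (0,0,0)(1,0,1) quoted above with a weekly season, but the data and settings are illustrative only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic daily admissions with a weekly cycle (illustrative only).
rng = np.random.default_rng(2)
idx = pd.date_range("2005-07-01", periods=3 * 365, freq="D")
weekly = 5.0 * np.sin(2 * np.pi * idx.dayofweek / 7)
y = pd.Series(50 + weekly + rng.normal(0, 4, len(idx)), index=idx)

train, test = y[:-365], y[-365:]              # 12-month hold-out set
model = SARIMAX(train, order=(0, 0, 0), seasonal_order=(1, 0, 1, 7), trend="c")
fit = model.fit(disp=False)

fc = fit.get_forecast(steps=len(test))
mean = fc.predicted_mean
ci = fc.conf_int(alpha=0.05)                  # 95% prediction interval

mape = float(np.mean(np.abs((test - mean) / test))) * 100
outside = float(((test < ci.iloc[:, 0]) | (test > ci.iloc[:, 1])).mean()) * 100
print(f"MAPE: {mape:.1f}%   outside 95% PI: {outside:.1f}%")
```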
Introduction: This paper discusses the initial results of ongoing research on designing and developing a model for predicting the spread of cancer. Cancer (a malignant tumour) occurs when the cell-division mechanism which controls the growth of cells in the human body is damaged. Predicting the spread of cancer cells is becoming critically important in decision-making, because by identifying the potential locations to which the cancer will spread, we can assist medical practitioners to plan early and suitable treatment for the identified cancer cells. It is believed that rapid damage to healthy tissues will occur during treatment if the potential regions are not detected earlier. The idea of this research was initiated by the shortcomings of the traditional method of defining the target volume (the size of the potential region to which the cancer will spread), which is dependent on the doctor's experience. Therefore, this study proposes a better approach to determining an accurate target volume, one which is more standardised and not dependent on uncertainty factors, and will focus on the prediction of local spread. Methods: The methodology of this research can be divided into two main phases: feature extraction and model development. Image processing (IP) is applied in the first phase to extract features from cancer images. These extracted features are used in the next phase to develop the prediction model. The complete steps involved in this study are illustrated in Figure 1. Results: This paper discusses the initial result of the study, which is the outcome of a systematic review. It covers the techniques and potential attributes to be used in the model development. The basic idea is illustrated in Figure 2. Conclusions: The findings of this study are not meant to replace the human expert, but are an approach towards improving cancer spread prediction, and can be used as a decision support tool in cancer management. This research will attempt to develop a way of accurately predicting the spread of cancer, leading to hopes of life-saving improvements in surgery and other cancer treatments.

Therapeutic Goods Administration, Canberra, Australia. All too often innovators come up with a great idea, which goes on to become a great product, ready for market, until they find there are regulatory requirements that need to be met first! Regulation is not about slowing down the path to market, but about ensuring the quality, the safety and the efficacy of medical devices introduced to the clinical environment and used on or by our patients. Consideration of regulatory requirements needs to begin at about the same time as that great idea emerges from the depths of your consciousness. It requires discipline and documentation in the development and trial phase, and controls over production of the product that emerges. It involves active follow-up of the performance of the product in the market, and using the information gleaned from that activity in a process of continuous improvement. Over the last fifteen years or so, a global model for regulation has been developed by a group known as the Global Harmonisation Task Force (GHTF), a group of representatives from both international regulatory agencies and the regulated industry. This model is already in place in a number of the developed economies of the world, and is rapidly being picked up in developing regions such as South East Asia, South America, the Middle East and others.
Although differing marginally in implementation between the regions, the model has proved robust and is well accepted. Understanding the key elements of the framework will assist a developer, and certainly make for an easier path when it comes time to enter the regulated market. Mike Flood is one of the senior staff in the Office of Devices, Blood & Tissues within Australia's Therapeutic Goods Administration, but for this presentation he will be wearing a GHTF hat as a member of Study Group 1 of that organisation, and will outline the core elements of this regulatory model and how it is being adopted at the global level.

James Talbot. Introduction: With the increased precision in dose delivery provided by highly conformal techniques such as intensity modulated radiotherapy (IMRT), the requirement for accuracy in patient positioning for treatment is emphasised. Traditionally, the position of the patient during planning is replicated for treatment through the use of room lasers and portal imaging. Recently, cone-beam CT integrated linacs have become commonplace for IMRT. A significant shortcoming of these methods is that they do not take setup deformations of the patient into account. Methods: This system allows the radiation therapist to visually guide the patient during positioning for treatment using augmented reality (AR). The program utilises AR by superimposing a 3D surface contour of the patient (obtained from the planning CT) over a real-time image during treatment, to assist the RT with position guidance. The scale and position of the 3D contour are chosen to precisely replicate the planning position. Not only does this provide setup verification without additional ionizing radiation exposure to the patient, it also offers constant patient monitoring throughout the whole treatment process. Results: In trials of the AR system with a 30 cm wooden phantom, deformations to its position have been easily apparent (figure 1) and corrections can be administered with accuracy on the order of millimetres. Although the system has yet to be tested in a clinical environment, it is expected that it will operate with similar accuracy. Discussion: A focus on simplicity has been prominent in the development of this system. The user interface has intentionally been made straightforward, requiring little user input. It is also easy and inexpensive to set up, and will take up little space in a linac room. Simplicity has been emphasised because the system is not intended as an alternative to current methods of position guidance, but as a supplement that the radiation therapist can refer to for verification. If it had been developed in a way that made it overly complex, its value would be significantly undermined and its use might not be worth the effort. The tests with the phantom have so far provided pleasing results, and the system's clinical value will be properly evaluated when test data are obtained from real patients, where visual position verification is likely to be influenced by factors such as patient motion (for example, respiration) and changes to the patient contour between fractions. The accuracy of position registration on a larger-scale subject (i.e. a patient) will also be investigated, and its performance will be characterised in a range of clinical situations.

radiation therapy treatment planning calculations. By assigning a bulk electron density to the MR images, the need for CT images could be removed. Methods: Treatment plans were created using the CT scans of 10 patients.
Bulk electron density plans were created on the CT images, with water-equivalent density applied to the whole pelvis and a separate density applied to bone. The effect on the normalisation point dose was compared to a full density CT plan. The effect of including or excluding bone from the bulk density plan was also examined, and effective depth calculations were used to calculate the optimal average density for bone in these plans. The bulk electron density plans that included a separate density for bone resulted in treatment plans similar to the full density CT plans. The optimal average density for bone was found to be 1.28 g/cm³ for the full bone, and 1.48 g/cm³ for just the outer bone. These effective depth calculations occasionally displayed asymmetrical effective densities for bone. When the full bone was assigned the appropriate density, the total dose was on average 198.9 cGy (0.6% lower) compared to the intended 200 cGy using the full density plan, with a standard deviation of 1.3 cGy (0.7%). When only the outer bone was assigned a density, the total dose was on average 200.6 cGy (0.3% higher), with a standard deviation of 1.7 cGy (0.9%). Completely homogeneous density plans displayed a larger deviation from the full density CT plans, delivering on average only 194.9 cGy (2.6% lower), with a standard deviation of 1.5 cGy (0.8%). Discussion: The bulk electron density plans including bone were superior to the fully homogeneous bulk electron density plans when compared to the full density CT plan, in agreement with the findings of Lee et al. (2003). Future work with a further 20 patients will examine the possibility of using the technique with MR images. The use of bulk electron densities for radiation therapy planning is feasible. The bone should be included and assigned an appropriate density for optimum accuracy. Acknowledgements: This work was partially funded by Cancer Council NSW Grant Number RG 07-09.

T. Kairn and A. L. Fielding. Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorized into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air). Methods: DOSXYZnrc phantoms are generated from CT data using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the 'CT-density ramp' (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm³) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment) in that voxel.
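A minimal sketch of that two-step conversion. The ramp points and material bins below are illustrative placeholders; the real 12-point ramp and 11 tissue types are specific to the scanner and are not reproduced here.

```python
import numpy as np

# Illustrative CT-density ramp: (HU, mass density g/cm^3) control points.
# A real ramp is measured for the specific CT scanner; these are placeholders.
RAMP_HU = np.array([-1000, -500, 0, 300, 1200, 3000])
RAMP_RHO = np.array([0.001, 0.50, 1.00, 1.15, 1.80, 2.80])

# Illustrative material bins: upper HU bound per material, and relative
# electrons-per-gram (water = 1). Real codes assign ~11 tissue types plus air.
MATERIAL_HU_BOUNDS = np.array([-950, -100, 120, 3000])
MATERIAL_EDENS_PER_G = np.array([1.000, 0.999, 0.996, 0.926])

def voxel_electron_density(hu: np.ndarray) -> np.ndarray:
    """Convert HU to electron density in the CTCREATE style: mass density
    from linear interpolation on the ramp, multiplied by the electrons-per-
    gram of the material bin the HU value falls into."""
    rho = np.interp(hu, RAMP_HU, RAMP_RHO)               # mass density, g/cm^3
    bin_idx = np.searchsorted(MATERIAL_HU_BOUNDS, hu)    # material assignment
    bin_idx = np.clip(bin_idx, 0, len(MATERIAL_EDENS_PER_G) - 1)
    return rho * MATERIAL_EDENS_PER_G[bin_idx]           # relative e-/cm^3

print(voxel_electron_density(np.array([-1000, -700, 40, 800])))
```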
In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only, and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated. Results & Discussion: Increasing the degree of simplification of the CT-density ramp has an increasing effect on the radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points instead of 12 results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water, plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, this alteration changes the calculated radiological thickness by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types. Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient's CT into an MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density for the specific CT scanner used. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia. The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data using the Pinnacle TPS. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.

Introduction: Palliative radiotherapy calls are frequently required for the emergency treatment of lesions for pain relief. Typically, the patient is planned and treated outside normal working hours by the on-call Radiation Therapist(s), who would need to CT scan the patient, plan the treatment, and then deliver the treatment. Often there are very few staff available to assist with lifting the patient, who is likely to be bedridden. The advent of Cone-Beam CT (CBCT) capabilities on the linac enables the imaging to be acquired on the same treatment couch as the treatment delivery, with the patient in the same position. The shortcoming has been the accuracy of calculations performed on the CBCT image. In this study we evaluate the use of kV CBCT images for dose calculation with Eclipse, to investigate whether this is a feasible procedure for this cohort of patients.
A Varian 21EX with OBI v1.3 and a Trilogy with OBI v1.4 (Advanced Imaging) were used to acquire CBCT images. An anthropomorphic chest phantom was initially used to investigate the image quality differences between the CBCT images and a planning CT image from a GE Lightspeed CT scanner. The dose was calculated in Varian Eclipse using the CBCT and CT image sets for the chest phantom, with and without inhomogeneity correction, and using both the Eclipse PBC and AAA algorithms. The doses for the CBCT images were calculated with HU calibration curves obtained for both our planning CT scanner and the CBCTs. The GE Lightspeed CT scanner was taken as the gold standard for dose calculation with inhomogeneity correction. For the phantom case, the difference in monitor units between the gold standard and those calculated on the CBCT image was less than 5% when using an inhomogeneity correction. When the inhomogeneity correction was not used, a similar number of monitor units was calculated between the two image sets. The image quality of the v1.4 CBCT image sets was higher than that of the earlier version; however, it is less quantifiable due to auto-filtering of the images used to correlate with anatomy. The image quality improvements in this software version lead to more accurate body contours and a HU calibration curve that is much closer to that of our gold standard. Use of CBCT for planning and calculating dose for emergency palliative radiotherapy is a viable option, based on convenience to patient and staff, the accuracy of the dose calculation, and the time taken for the whole procedure.

HIT Lab NZ, University of Canterbury, Christchurch, New Zealand. Video-conferencing tools are slowly becoming part of work practices for meetings, discussions and remote collaborations in the scientific field. As the technology develops (high-speed networks, high-resolution video, low latency), a large number of fundamental issues still remain in this area of computer-mediated collaboration: zero-sum gaze, separation between user embodiment and view of the task, floor control for multimedia data, etc. In this presentation, we will overview some of these issues, the future of computer-mediated collaboration (i.e. embedding video-conferencing tools in collaborative virtual environments) and our work in the area.

Introduction: Radiotherapy is an important modality for the treatment of cancer. A typical treatment program consists of daily sessions on a linear accelerator over a period of several weeks. To ensure the treatment is delivered as planned, the patient's treatment position has to be identically reproduced every time. Given the flexibility of the human body, this is no trivial task. This research aims to assist the radiation therapist (RT) in the daily patient set-up procedure using augmented reality. Methods: Treatment planning and delivery rely on 3D patient models based on computed tomography (CT) data sets of the patient. CT images represent a "snapshot" of the patient's anatomy at the time of the image acquisition. The main idea of this project is to extract the outer contour of the patient from the CT slices, and to augment the live image of the patient on the linear accelerator couch with the 3D patient contour. This enables the RT to visualise both the current patient position and the original one, which is to be reproduced. One of the main difficulties of the approach is registering the original 3D patient contour with the origin of the linear accelerator coordinate system.
This point is known as the isocentre, which is where the radiation beams intersect as the linear accelerator gantry revolves around the patient. Since this is a virtual point in space, AR tracking markers cannot be attached directly to it. Moreover, there are no points on the linear accelerator that remain stationary with respect to the isocentre. As a solution to this, a small cube was constructed with tracking markers attached to its faces (figure 1). This cube can be aligned with the lasers in the linear accelerator room (which intersect at the isocentre) and the coordinates of the AR system are set to match those of the cube. The system is calibrated this way and the cube is removed. Results and discussion: Using a small-scale set-up in a lab environment, with an ordinary web camera and three orthogonal laser beams, the position of the isocentre could be established with sub-millimetre accuracy. A small wooden statue with deformable limbs was used to investigate the system in terms of detecting both rigid and non-rigid position deviations. Small variations on the order of a millimetre could easily be visually detected. Further work includes testing the system in a clinical environment using the room surveillance cameras and real patients.

The role of the sympathetic nervous system in the chronic regulation of blood pressure is a contentious issue and currently the subject of considerable research effort. One method of investigating its function is to measure the sympathetic nervous system's output directly by exposing nerve bundles and recording action potentials. The level of sympathetic nerve activity (SNA) can then be compared with simultaneously acquired blood pressure measurements, allowing researchers to experimentally investigate their interaction. This paper describes an implantable telemeter designed to fill this niche. Methods: The telemeter measures blood pressure and SNA in vivo and transmits the recordings wirelessly to a base station. Blood pressure measurements are made using a fluid-filled catheter (600 µm OD) connected to a silicon pressure sensor which is housed within the telemeter. SNA is recorded using two 100 µm diameter stainless steel electrodes coiled around the renal sympathetic nerve and sealed in place using silicone adhesive. Blood pressure and SNA recordings are digitized using a 12-bit analogue-to-digital converter and transmitted on the 2.4 GHz ISM band. A remote receiver decodes the transmissions and converts them to analogue signals suitable for data acquisition. The telemeter is encapsulated in medical-grade silicone elastomer and weighs 15 g, making it small enough for use in rats. The telemeter is powered using inductive power transfer technology, whereby energy is supplied using high-frequency magnetic fields. This allows the telemeter to operate indefinitely without percutaneous leads and the associated infection risk. Results: Figure 2 shows an example recording of blood pressure (from the descending aorta) and renal SNA from an unconstrained conscious Wistar rat. The recording shows the hallmark traits of SNA, with bursts of activity occurring synchronously with the cardiac cycle. The blood pressure recordings are of high fidelity, owing to the manometer's wide bandwidth (>200 Hz). The telemeter is of a similar size to existing devices but possesses many advantages, such as inductive charging, digital transmission, and a high-bandwidth, microvolt-input-range bio-potential amplifier.
Future work will concentrate on miniaturization and on ascertaining the long-term stability of the blood pressure measurement system.

Introduction: Lymphoedema is a chronic disorder in which fluid, protein and fibre accumulate in the tissues due to the destruction of the lymphatic system. Over time the affected limb becomes abnormally large and hard, and gradually increases its resistance to indentation under pressure. Appropriate treatment requires accurate diagnosis of the pathophysiologic changes. These changes can be monitored by the use of tonometry. Traditionally, tonometry measures the resistance of tissues to compression, providing a measure of tissue hardness. However, the traditional mechanical tonometer is often considered to be misleading, arbitrary and susceptible to the user's technique [1]. In addition, the literature describes the tissue as having viscoelastic properties. These may be useful in monitoring lymphoedema, but the mechanical tonometer is incapable of measuring them. Methods: A viscoelastic tonometer was designed to measure the viscoelastic properties of lymphoedema tissue. The tonometer is software controlled to indent the tissue at a fixed rate until 200 g is applied to the tissue. The indentation depth is held for 10 seconds and the indenter is then retracted. During this test cycle, the indentation displacement and the resulting tissue force are transmitted wirelessly to a computer, where they are displayed and recorded for analysis. Results: The tonometer has successfully measured and recorded the displacement and force during the test cycle. Tests have been performed on simulated lymphoedema tissue represented by viscoelastic foams. The foams vary in hardness and viscosity to represent the various stages of lymphoedema. The results show that the tonometer is capable of determining the viscoelastic properties of materials. The recorded data were analysed to determine the hardness and viscoelastic nature of the foams, allowing the gross characteristics of the foams to be resolved. The viscoelastic tonometer is an improvement on the traditional tonometer. It is capable of determining the viscoelastic nature of the foams and will be able to distinguish the viscoelastic nature of tissue, providing relevant and accurate information about the condition. Viscoelastic data can be used to properly understand the current state of the tissue. It will allow the clinician to measure changes in the condition and the efficacy of treatments, and could be used at home for self-monitoring.

Introduction: A clinical examination of the abdomen is performed as part of a routine physical examination, when a patient presents with abdominal pain or a history that suggests a possible abdominal pathology, or when there is suspected internal trauma. A clinician palpates the regions of the abdomen to feel for tenderness, guarding and underlying abnormalities. This involves both observing the patient's response and feeling the response of their abdomen to various applied pressures in the different regions. We aim to develop a simulator that can teach and train abdominal palpation examination skills by determining the applied force and position of an examiner's hand on a patient's abdomen. In order to incorporate the training of appropriate palpation forces into the design of the simulator, it was necessary to know the location and magnitude of typical forces applied to the abdomen during palpation. A search of the literature found only initial studies [1,2] on a limited number of test cases recording forces applied to the abdomen.
Methods: Ethics approval was obtained to perform a pilot study in the South Australian Movement Analysis Centre to investigate the palpation examination forces applied to a subject's abdomen. For each trial, reflective markers were attached to the subject's abdomen and the clinician's dominant hand to track their motion with the infrared cameras of the VICON three-dimensional motion analysis system, while AMTI force plates recorded the force. The study showed that individual clinicians were very systematic in surveying the abdomen, with methods varying between clinicians. Light and deep palpation forces were consistent within individual clinicians, with more variation apparent between clinicians. Deep palpation produced greater forces than light palpation, as expected, and the duration of the clinicians' deep palpation was 2-3 times longer than that of light palpation. The palpation information recorded will be incorporated into the development of the simulator. Conclusions: Initial findings from the investigation into applied abdominal palpation forces show that individual clinicians have a consistent palpation technique, with two distinct ranges in which light and deep palpation lie. These findings have enabled further development of the abdominal palpation training simulator. Since the introduction of electronic anaesthesia delivery systems over the last 10 years, anaesthesia has seen a significant leap in what is available to help improve patient care and drive efficiencies and cost savings. This presentation will explore some current technological and workflow possibilities being integrated into currently available systems, as well as what may be in store in the near future. Over the last 50 years, since the discovery of the structure of DNA, our understanding of the molecular basis of life has undergone an extraordinary revolution and we now have a good understanding of how many proteins work and contribute to biophysical processes in cells. Biomedical science has, however, reached a stage where it is rich with molecular detail but rather poor in terms of understanding how these molecular events contribute to the physiological processes that characterise life at higher scales - tissues, organs and organ systems. The reason for this is partly that physiological processes are just so complex - most processes at the cell level are highly redundant and few diseases, for example, are associated with the misbehaviour of single genes. The new discipline of 'systems biology' is helping to improve our understanding of complex biochemical systems at the cell level. But there is an urgent need for greater input from engineers, physicists and mathematicians, who can use the tools of mathematical modelling, based on well understood physical principles, to integrate the molecular detail back into our understanding of intact living systems at the tissue and organ levels. This talk will present an overview of the 'Physiome Project', which is using this new computational physiology approach, and in particular highlight the work on structure-function relations in the heart, lungs and other organs by the Auckland Bioengineering Institute.
It is often said that the best thing about standards is that there are so many of them. Given the current flurry of activity in organizations across the globe developing standards and standards-based integration profiles, one has to ask: "So when will I be able to buy products that support all these great standards, and will they be any better than what I can get now?!" For those who have been listening to the hype for … decades … credibility ebbed long ago, especially when standards lag current technology offerings by many years and regulated medical devices typically add a few more years on top of that. This presentation provides an overview of international efforts working to define standards and interoperability profiles that promise to deliver medical device interoperability, both technical and semantic, from the sensor to the EHR, from the bedroom to the OR, and from Cambridge to … Christchurch. The current and future activities of the Integrating the Healthcare Enterprise (IHE) Patient Care Device (PCD) domain are described with an eye toward how they will support systems that lead to true multi-vendor heterogeneity, semantic comparability, and real-time data availability. This will include topics such as device-to-enterprise communication, alarm communication management, device terminology "Rosetta" mapping, point-of-care infusion 5-rights verification, device point-of-care integration, and medical equipment management. Also covered is the rapid convergence of Clinical Engineering and healthcare I.T., which has resulted in the creation of organizations such as the CE-IT Alliance between AAMI (… not talking car insurance!), ACCE, and HIMSS. These rapidly evolving information systems have also resulted in the updated regulatory definition of medical devices to include "standalone software", only to beg the question of when these applications are actually classified as devices vs. healthcare applications. This regulatory confusion has resulted in the issuance of the U.S. FDA's draft guidance on Medical Device Data Systems (MDDS) and controversy around standards such as the draft ISO 29321, which defines a risk management process for "health software" that does not fall under the purview of regulatory agencies but can still have serious adverse effects on patient safety. These highly integrated systems have also raised the question of whether a new regulatory approval paradigm leveraging assurance/safety cases should be used in addition to the current quality-system based approach. This is particularly driven by work from the CIMIT Medical Device Plug-and-Play (MDPnP) program and the emerging Integrated Clinical Environment (ICE) standards that define dynamically configurable networks supporting closed-loop control, smart alarms, safety interlocks, and clinical algorithms, all in a heterogeneous network that is safe and reliable. Finally, these systems of systems are presenting unique integration and management issues that require the application of life-cycle based risk management. IEC 80001 is being developed by ISO/IEC Joint Working Group 7 to define a risk management process for networks that incorporate medical devices. It includes organizational roles and responsibilities, a risk management process, and the work products that are developed.
The application of IEC 80001 potentially represents a sea change in how health care providers manage their medical equipment and networking systems, as well as their relationships with technology providers and integrators. This talk discusses some of the issues involved in commercial product launches, and how approaching this correctly can make the difference between success and failure. Polartechnics is a public company listed on the ASX specialising in the research, development, manufacture and marketing of devices for the screening of fatal preventable cancers. Eighteen months ago, Polartechnics had a market-ready product, but no market. By adopting the strategy of specifically targeting Asia, Polartechnics has now accessed markets in 17 countries, a potential market of 1.1 billion users. Find out how Polartechnics has achieved such a remarkable launch success by understanding the local cultures and by walking the walk, not talking the talk. Introduction: Over the last century, medical imaging has gained an increasingly significant and integral role in clinical medicine. At the outset of the 21st century, the field has been witnessing a number of gradual but profound transitions. This lecture delineates 10 major movements that characterize these transitions, and provides examples reflective of the changes. The underlying changes in medical imaging can be characterized as 10 distinct movements. These include general shifts from two-dimensional to three-dimensional imaging, from developmental work to optimization of existing equipment, and from a qualitative approach towards imaging to an increasingly quantitative approach. Further changes involve moving from application-generic to application-specific examinations, from anatomical and structural imaging to functional and molecular imaging, and from patient-generic to patient-specific protocols. While there is a significant push to move from research in laboratories to actual clinical practice, there is also a similar movement to test human applications on animal models. Finally, the field is experiencing a movement from an information-poor situation to an information-rich opportunity, and ultimately from an anecdotal practice of medicine to an evidence-based one. Conclusions: These movements largely inform the current tendencies in medical imaging research and practice. Medical physics will remain relevant and influential to the extent that it leads and contributes to these movements. Methods: Several user-specifiable parameters influencing the character of the simulated beam relate to machine-specific parameters which are not directly measurable. Examples include the incident energy, energy distribution, intensity distribution, and spot size of the primary electron beam on the target. These parameters must be inferred from comparisons with measurements known to be sensitive to them. Calculated photon beam phase space descriptions must also be validated against measured data. We have surveyed the literature and compiled a concise procedure based upon our experience of commissioning a MC model.
A procedure for the development of a Monte Carlo linear accelerator model is outlined along with some of the important issues to be considered at each phase, including:
- acquisition of component geometry and materials specifications
- measurement of relevant reference data sets which are readily comparable to simulation output, for machine parameter determination and phase space description validation
- encoding and executing the simulation with appropriate approximations and choices of variance reduction parameters
- the iterative process of tuning electron beam parameters and matching to sensitive reference measurements, including a summary of published sensitivity analyses
- validation of output by comparison of calculated dose distributions to measured data, e.g. water tank depth dose and profiles.
Conclusions: We discuss the development and commissioning of a Monte Carlo model which is being developed for detailed studies of small treatment fields shaped by a micro-multileaf collimator. A representative set of measurements which should be conducted for validation is suggested, and a procedure described for the refinement of a model to be commissioned for the purposes of beam characterisation and dose calculations. Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the 'gold standard' for predicting dose deposition in the patient [1]. This project has three main aims:
1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine.
2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine.
3. To investigate the radiobiological significance of any errors between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probabilities (NTCP).
The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and the gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient.
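To make the plan-import step concrete, the sketch below shows how the beam parameters named above (monitor units, jaw positions, gantry orientation) can be pulled from a DICOM RT Plan file. It is a hypothetical illustration using the open-source pydicom library as a stand-in for the PixelMed Java DICOM Toolkit used by the authors; the attribute keywords are standard DICOM RT Plan module names, but the surrounding structure is an assumption, not the authors' actual tool.

```python
# Hypothetical sketch: extract per-beam plan parameters from a DICOM RT Plan
# with pydicom (a stand-in for the PixelMed Java DICOM Toolkit named above).
import pydicom

def summarise_rtplan(path):
    plan = pydicom.dcmread(path)
    # Monitor units (beam metersets) live in the FractionGroupSequence,
    # keyed by beam number.
    mu_by_beam = {rb.ReferencedBeamNumber: float(rb.BeamMeterset)
                  for rb in plan.FractionGroupSequence[0].ReferencedBeamSequence}
    beams = []
    for beam in plan.BeamSequence:
        cp0 = beam.ControlPointSequence[0]  # first control point holds the set-up
        jaws = {dev.RTBeamLimitingDeviceType: [float(p) for p in dev.LeafJawPositions]
                for dev in cp0.BeamLimitingDevicePositionSequence}
        beams.append({
            "name": beam.BeamName,
            "gantry_deg": float(cp0.GantryAngle),
            "collimator_deg": float(cp0.BeamLimitingDeviceAngle),
            "jaw_positions_mm": jaws,  # e.g. {'ASYMX': [-50.0, 50.0], ...}
            "mu": mu_by_beam.get(beam.BeamNumber),
        })
    return beams
```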
(2) Dose Comparison: TPS dose calculations can be obtained using either a DICOM export or direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are spatial-resolution independent and able to interpolate for comparisons. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated. A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries, such as IMRT, in treatment sites where patient inhomogeneities are expected to be significant. CT data of the thorax phantom was acquired to investigate doses in the target for all of these three energies. Usually, lung density is patient specific and may vary from patient to patient. The effect of lung density on the dose distribution was investigated for 4 different densities ranging from 0.12 to 0.30 g/cc. Dose volume histograms (DVHs) and a dose uniformity index (mean dose/minimum dose) were used to assess the changes. In all cases a phase space file of the LINAC with the 4 cm cylindrical collimator was used to model the beam for the clinical setup. Results: DVH results demonstrated that the dose coverage of the GTV, PTV1 and PTV2 became worse with increasing beam energy and decreasing lung density. The dose uniformity of the GTV and PTV1 decreased exponentially with increasing energy, with values for the GTV of 0.98, 0.95, 0.93 and 0.91 for energies of 4, 6, 10 and 18 MV respectively. However, the dose uniformity in PTV2 increased with increasing energy. The dose uniformity in the GTV and PTV1 for different lung densities increased almost exponentially with increasing density, and decreased exponentially for PTV2. Discussion and Conclusion: Monte Carlo modelling of beams used in stereotactic radiotherapy agreed well with measured data. Increasing the beam energy to improve the dose uniformity of a PTV that includes a large volume of normal lung tissue actually results in poorer dose uniformity in the GTV. Therefore, in a clinical setting it is better to create a plan using lower beam energies for good coverage of dose to the target. The modelling with different lung densities suggests that there may be possible advantages in gating treatments to use a phase where the lung density is greatest. Ongoing work will use the Monte Carlo data to investigate the effect of energy on lateral electronic disequilibrium and dose conformity for small lung tumours.
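A minimal sketch of the dose uniformity index used in the lung study above. The text defines it as mean dose/minimum dose, yet the reported values fall below 1 and decrease as uniformity worsens, which matches the reciprocal (minimum-to-mean) convention; that convention is assumed here, and the dose grid and target mask below are toy numpy arrays, not study data.

```python
# Toy sketch of a dose uniformity index, assuming a minimum-to-mean
# convention (consistent with the reported values of 0.91-0.98).
import numpy as np

def uniformity_index(dose, mask):
    """dose: 3D dose grid (Gy); mask: boolean array selecting target voxels."""
    target = dose[mask]
    return target.min() / target.mean()

# Example: a toy target with one slightly cold voxel.
dose = np.full((10, 10, 10), 2.0)
dose[0, 0, 0] = 1.8
mask = np.zeros(dose.shape, dtype=bool)
mask[:3, :3, :3] = True
print(round(uniformity_index(dose, mask), 3))  # 0.903; 1.0 would be perfectly uniform
```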
Introduction: The MammoSite® balloon is placed inside the breast surgical cavity and inflated with a combination of saline and radiographic contrast medium. The contrast medium contains elements with high atomic numbers, like iodine, and therefore the balloon mixture cannot be considered tissue or water equivalent. At present, most brachytherapy treatment planning systems do not account for the inhomogeneities in dose calculations (Nath et al. 1995). Introduction: Intensity Modulated Radiation Therapy (IMRT) and Stereotactic Radiotherapy (SRT) use small radiation field sizes. For these fields an ideal detector would be a tissue equivalent material with good spatial resolution and high sensitivity, whilst being independent of energy, temperature, dose rate and orientation. Radiation dosimetry based on synthetic diamonds made through a Chemical Vapour Deposition (CVD) process has initiated a growing interest in these detectors, due to the availability of relatively inexpensive synthetic diamond materials. Methods: We are developing a CVD diamond detector suitable for use in small radiation fields, with direct application in IMRT and SRT. Preliminary research includes examination of the optimal physical properties of the detector, such as size, shape, doping impurity in the CVD diamond, electrode materials, and detector orientations, using EGSnrc Monte Carlo (MC) simulation. Simulation parameters for modelling the synthetic diamond detector were investigated for a carbon thickness of 100 μm and an electrode thickness of 0.1 μm using the DOSXYZnrc code, an accompanying MC code for EGSnrc. DOSXYZnrc can simulate variable voxel sizes in a water phantom to determine the minimum voxel dimensions that can be modelled with low statistical uncertainty. Simulations have been performed for a 1 μm voxel inserted at a 10 cm depth in a 30×30×30 cm³ homogeneous water phantom (modelled from 1×1×0.1 cm³ water voxels) in a 10×10 cm² 6 MV photon beam perpendicular to the phantom in the (X, Y) plane. Dose absorption in the 1 μm voxel insertion was analysed by varying the following MC parameters: cross-section data, Boundary Crossing Algorithm (BCA), and the HOWFARLESS option. The insertion of a 1 μm thick voxel in the water phantom at a 10 cm depth produced the largest absorbed dose uncertainty (±6.5%, 1 st. dev.) when 700 ICRU cross-section data, the PRESTA-I BCA, and the HOWFARLESS option turned "on" were used (Figure 1(a)). The deviation from the expected dose was studied by using different combinations of the MC simulation parameters, as shown in Figure 1(a) and (b). The minimum deviation was found for 521 ICRU cross-section data, the EXACT BCA, and the HOWFARLESS option turned "off". With these parameters the deviation was further minimised by increasing the water voxel size simultaneously in both the X and Y directions (Figure 1(c)). Figure 1. Absorbed dose deviations at a 10 cm depth in the water phantom for simulations with combinations of the BCA set to EXACT ("exa") or PRESTA-I ("pre") and the HOWFARLESS option turned "off" or "on", using 700 ICRU cross-section data (a) and 521 ICRU cross-section data (b). The effect of water voxel size is shown in (c), where the parameters used are 521 ICRU cross-section data, the EXACT BCA, and the HOWFARLESS option turned "off". For all simulations shown, ECUT was set to 0.7 MeV and PCUT to 0.01 MeV. Monte Carlo parameters and water phantom geometry have been successfully optimized in preparation for precise simulation of the synthetic diamond detector.
Further work will focus on more complex geometries closer to the actual detector shape, and on detector structure development. Medical marketing traditionally poses many challenges, and none more daunting than those faced by companies whose small domestic markets require going global early in the product life cycle. By understanding the way in which customers and the referral path recommend and adopt technology, companies can accelerate growth in new markets and use their speed to market as a competitive strength. No longer is just having the right value proposition enough: in a population of more than 2 billion people, marketers in Asia Pacific are having to compete with emerging low-cost competitors and increasing demand for brands which reflect Asian modernity. Leveraging models used in the high technology and IT industries, growth for medical technologies can be maximised by building brand and providing relevant reference points for new segments. Traditionally, new technologies are first adopted by those in the cycle defined as visionaries and early adopters, but penetration into the market majority is fundamental for accelerated, sustained growth. For this, companies need to be prepared to champion product simplification, invest in professional and consumer advocacy, and adopt compelling Asian brand attributes. This seminar will examine the forms of Intellectual Property (IP) available to medical device companies and some of the issues involved in seeking protection. Initially, an overview of the different types of IP, namely patents, trade marks, registered designs, and copyright, will be provided, explaining how these different types of IP interrelate. Following this, the criteria and procedure for obtaining patent protection will be discussed in more detail, allowing attendees to consider whether patent protection is appropriate for their own products, and what steps would need to be taken to protect their position. Finally, there will be a brief overview of common mistakes made when handling IP in the medical device industry. Therapeutic Goods Administration, Canberra, Australia. All too often innovators come up with a great idea, which goes on to become a great product, ready for market, until they find there are regulatory requirements that need to be met first! Regulation is not about slowing down the path to market, but about ensuring the quality, the safety and the efficacy of medical devices introduced to the clinical environment and used on or by our patients. Consideration of regulatory requirements needs to begin at about the same time as that great idea emerges from the depths of your consciousness. It requires discipline and documentation in the development and trial phases, and controls over production of the product that emerges. It involves active follow-up of the performance of the product in the market, and using the information gleaned from that activity in a process of continuous improvement. Over the last fifteen years or so, a global model for regulation has been developed by a group known as the Global Harmonisation Task Force - a group of representatives both from international regulatory agencies and the regulated industry. This model is already in place in a number of the developed economies of the world, and is rapidly being picked up in developing regions such as South East Asia, South America, the Middle East and others. Although differing marginally in implementation between the regions, the model has proved robust and is well accepted.
Understanding the key elements of the framework will assist a developer, and certainly make for an easier path when it comes time to enter the regulated market. Mike Flood is one of the senior staff in the Office of Devices, Blood & Tissues within Australia's Therapeutic Goods Administration, but for this presentation he will be wearing a GHTF hat as a member of Study Group 1 of the organisation, and will outline the core elements of this regulatory model and how it is being adopted at the global level. Director, Monash Centre for Synchrotron Science, Monash University, Clayton, VIC 3800, Australia. Synchrotron x-rays are unique in their intensity, brilliance and tuneable energy spectrum. Having an x-ray source with such flexibility has allowed the development of a number of novel x-ray imaging methodologies offering many exciting opportunities for medical applications. Exquisite spatial resolution coupled with high time resolution and soft tissue contrast are just some of the technical enhancements that have been realised. Despite real technical promise, the use of synchrotron x-rays for patient imaging has been limited to a few studies. Nevertheless, ideas tested and proven using synchrotrons are driving technical developments which will allow the new methodologies to be translated into the clinic. Indeed, the first of these has already happened. A major investment in the Imaging and Therapy Beamline at the Australian Synchrotron will very soon give Australasian researchers one of the finest facilities in the world with which to explore the possibilities. An overview will be given of the research areas that are driving the construction of the Imaging and Therapy Beamline. At present all the work is performed on animals, but the facilities are being developed to allow application to humans once the benefits are clearly established. Specific examples include:
- improving aeration of premature babies: low dose x-ray image sequences of newborn lungs that reveal alveolar structure and the first breaths after birth
- sub-micron resolution computed tomography scans yielding insights into cystic fibrosis.
Introduction: The advent of voxel-based quantitative analysis of brain images using statistical parametric mapping (SPM) has seen strong progress in the measurement of brain function. This has not yet been extended to the spin-echo (structural) T1- and T2-weighted MR images that are important clinically. This is largely due to the patient-to-patient variations in global signal levels that appear to be inherent to structural MR. In this work we describe a method to characterise the global signal level within the brain. We then adjust for the global signal level in an analysis of variance in a 3D comparison between two groups of subjects, and compare our results with an alternative validated approach. Methods: Evaluation of the global signal level within the brain is here based on analysis of the histogram of voxel signal levels for each subject. Three measures of the global signal level, based on linear fits to the leading and trailing slopes of the histograms of the T1- and T2-weighted MR images (after masking out of non-brain tissue), were considered. They were the zero crossings of the upslope (LoX) and the downslope (HiX), and the weighted mean (WM) computed using the histogram values above 50% of the histogram peak and the linear fits below. We incorporated these measures into an analysis of 30 subjects and 29 controls using SPM2 (statistical parametric mapping) by entering them as a nuisance covariate.
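To make the three histogram measures concrete, here is a minimal numpy sketch; it is an illustration, not the authors' code. The slope-fit window (bins between 20% and 80% of the histogram peak) is an assumption, and WM is simplified to use only the bins above half the peak, whereas the method described also incorporates the linear fits below that level.

```python
# Illustrative sketch of the LoX, HiX and WM global-signal measures
# computed from a brain-voxel intensity histogram.
import numpy as np

def global_measures(brain_intensities, nbins=256):
    counts, edges = np.histogram(brain_intensities, bins=nbins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    peak = int(counts.argmax())

    def slope_zero_crossing(bins):
        # Fit a straight line to the chosen slope (assumed 20-80% of peak
        # window) and extrapolate it to its zero crossing.
        sel = bins[(counts[bins] > 0.2 * counts.max()) &
                   (counts[bins] < 0.8 * counts.max())]
        slope, intercept = np.polyfit(centres[sel], counts[sel], 1)
        return -intercept / slope

    lox = slope_zero_crossing(np.arange(0, peak + 1))        # leading (up) slope
    hix = slope_zero_crossing(np.arange(peak, len(counts)))  # trailing (down) slope
    above_half = counts >= 0.5 * counts.max()
    wm = np.average(centres[above_half], weights=counts[above_half])
    return lox, hix, wm
```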
This effectively adjusts for global intensity. The same analysis was performed, but using a global measure from an alternative method that has been validated (for T2) against T2 relaxometry (DF Abbott et al, "Voxel Based Iterative Sensitivity analysis (VBIS): validation for T2 imaging of hippocampal sclerosis", 2008, submitted for publication in NeuroImage). Results: The SD/mean for our global measures within this population was typically 7.7% for T1 and 6.2% for T2. The best SPM statistics for the most significant voxel were obtained using LoX for T1 and WM for T2 (HiX also yielded strong T2 results). SPM analysis using the validated VBIS global measure approach detected significant changes in the same locations at comparable statistical levels. Introduction: Magnetic resonance imaging (MRI) is a widely utilized medical imaging method; however, one major disadvantage of MRI compared to competing methods such as computed tomography (CT) is the relatively long scan time. One effective method to shorten the scan time is to reduce the amount of data acquired, but doing so violates the conventional Nyquist sampling theory. Compressed sensing (CS) [1] is a recent development in information extraction theory that allows signal recovery from samples acquired at sub-Nyquist rates. The data acquisition nature of MRI lends itself naturally to the application of CS, and initial implementations have already demonstrated good evidence of success [2]. We demonstrate that by incorporating a prior estimate of the imaged object, which is usually available, it is feasible to further improve the CS reconstruction in MRI. Methods: Sampling in MRI is carried out in Fourier space, rather than in the pixel-wise fashion of conventional cameras, so it can be conveniently put in the matrix form

y = Mx,   (1)

where M is the Fourier transform matrix, and y and x denote the acquired Fourier data and the object magnetization respectively. Nyquist sampling theory states that the same number of samples as the size of the image is required to prevent aliasing artifacts at reconstruction. From a linear algebraic point of view, sampling below the Nyquist limit reduces the row rank of the matrix M and thus makes the inverse problem non-solvable. Eq. (1) can be equivalently written, by introducing a known linear transform Ψ without altering the nature of the problem, as

y = (MΨ⁻¹)(Ψx) = (MΨ⁻¹)x′.   (2)

When Ψ is chosen to be a particular transform such as the Wavelet or Discrete Cosine Transform (DCT), x′ = Ψx may possess a sparse nature, which means a significant portion of the entries of x′ are zero or negligibly small. The sparseness is of great utility here, as it allows us to discard the trivial portion of x′ with little loss in the original signal x, the same property exploited in image compression techniques such as JPEG. In terms of the linear algebra, once the trivial portion of x′ is discarded, the corresponding columns of MΨ⁻¹ can be removed, leaving a matrix with a much reduced column rank, which helps make the inverse problem solvable. Of course, in practice we are unable to know in advance which portion of x′ can be safely discarded. CS theory instead recovers x′ by solving

min ||x′||_1   subject to   y = (MΨ⁻¹)x′.

In words, among all the possible datasets that are consistent with the measurement, it seeks the one that has the smallest overall sum of absolute values. It is easy to extend the argument that the level of sparseness of x′ promotes the accuracy of its recovery (proved in [1]); the sparseness is, however, fixed for a given x and Ψ. What if we instead recover a signal x_R that contains the same but reordered elements as x (think of a series of numbers and its ordered form)?
This new x_R can be formed to possess a greater level of sparseness (in Ψx_R), which promises higher recovery accuracy. Eventually the recovered x_R can be re-ordered back to x to gain a better reconstruction compared to the case of recovering x directly. Apparently this trick requires prior ordering knowledge, which we propose can be gained from a low resolution approximation of x that is usually available at the pilot scan stage. Results and discussion: Shown in Fig. 1 below are the image reconstructions of a healthy adult's brain (axial plane) using different techniques; in each case only 33% of the samples were acquired (corresponding to a 3-fold scan speed-up). It is seen that a direct inverse Fourier transformation (Fig. 1b) results in severe aliasing artifacts, as anticipated by Nyquist's law. CS theory alone (Fig. 1c) manages to recover most of the image; however, it loses some fine detail (arrowed) and image contrast. Reconstruction using our method (Fig. 1d), incorporating a low resolution estimate, is seen to be superior to the original CS method, benefiting from the enhanced data sparseness level. Conclusion: Compressed sensing has been shown to have great potential for accelerating MRI scans by reducing the amount of data acquired. We have demonstrated a method to improve CS reconstructions by incorporating a low resolution estimate of the underlying image. Introduction: Fluorescence imaging is widely used to probe biological structure and function at the cellular and subcellular levels. The fluorometric systems used within this context generally incorporate relatively complex free-space optical assemblies. In this paper, we describe a simple fibre optic fluorescence spectrometry system with a wide variety of biomedical applications. This low-cost, all-fibre system is portable, robust and has the capacity to acquire fluorescence spectra at rates up to 1 kHz. With minimal change in set-up, the system can be used for a variety of fluorophores and to detect and distinguish multiple dyes simultaneously. We present measurements of action potentials (AP) in the di-4-ANEPPS stained heart and of the concentration of GFP-tagged bacteria. Methods: The output of a solid-state laser is injected into a 2x2 multi-mode fibre coupler. Half of the excitation light is guided to the sample while the other half is used for laser noise monitoring. The fluorescence from the illuminated sample is collected at the tip of the excitation fibre and returns to a compact fibre-coupled spectrometer via the 2x2 coupler and a laser line emission filter. Dichroic mirrors are not required because spectral decomposition is readily achieved in software. Results: Optical measurement of action potentials: Our system has been used to record APs in a Langendorff-perfused rabbit heart stained with di-4-ANEPPS. A dual-wavelength technique [1] was used to extract APs from successive time-resolved spectra taken at a fast sampling rate of 1 kHz. As shown in Fig. 1, the extracted cardiac APs have an acceptable SNR (~10) and exhibit the key features of the cardiac action potential. During each upstroke of the heartbeat, spectral decomposition gives 40% more fractional fluorescence than a dichroic mirror configuration. In a dichroic configuration, crosstalk between short and long wavelengths is inevitable, as the transition between the bands is not sharp. Although this crosstalk can be minimised by adding bandpass emission filters, the fluorescence signal will be reduced, resulting in poorer SNR.
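The in-software spectral decomposition described above can be illustrated with a short sketch: each 1 kHz spectrum is integrated over two emission bands defined in silico and the band ratio is taken, in the manner of dual-wavelength ratiometric imaging. The band edges below are illustrative assumptions, not the authors' calibration values.

```python
# Illustrative sketch of in-software dual-band spectral decomposition;
# the band edges are assumed values, not the authors' calibration.
import numpy as np

def band_ratio(wavelengths_nm, spectra,
               short_band=(540.0, 600.0), long_band=(620.0, 680.0)):
    """spectra: (n_frames, n_wavelengths) array, one row per 1 ms acquisition."""
    short_sel = (wavelengths_nm >= short_band[0]) & (wavelengths_nm < short_band[1])
    long_sel = (wavelengths_nm >= long_band[0]) & (wavelengths_nm < long_band[1])
    # Integrate each frame over the two in-silico bands and take the ratio;
    # the ratio cancels common-mode intensity noise and follows the
    # voltage-dependent spectral shift of the dye.
    return spectra[:, short_sel].sum(axis=1) / spectra[:, long_sel].sum(axis=1)
```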
Detection of bacteria: Bioremediation is a process by which micro-organisms restore a contaminated environment to its original state. Although gaining popularity as a realistic remedial technique, it is hindered by the lack of effective, reliable, non-destructive, in situ monitoring methods [2]. Here we propose to use our time-resolved fibre optic spectroscopic probe to acquire signal from fluorescent degrader bacterial species that are remotely observed in the immediate environment of the probe. Preliminary experiments were performed using exactly the same set-up as for the AP measurements. The laser source was switched from 532 nm to 473 nm and the emission and excitation filters were changed appropriately. The bacterium Salmonella enterica was tagged with a constitutively expressed GFP [3] and prepared in small vials at different concentrations. We have calculated that the minimum number of cells detected within the collection volume is as low as six, demonstrating the sensitivity of our probe. Moreover, the fluorescence intensity measured by the probe is linear against cell concentration, which demonstrates that the signal acquired by the fluorescence probe can be converted to a particular cell concentration. This information can be used to determine the biodegradation efficiency of the bacteria by monitoring their growth. Ultimately, bacteria can be tagged with GFP to report on the expression of bioremediation genes in situ to confirm the important bioremediation activities of bacteria. Our compact all-fibre fluorometric system has the capacity to resolve emission spectra in very small collection volumes using a range of different excitation wavelengths. It is also highly flexible in that the acquired spectra can be decomposed and averaged across wavelength bands that are defined in silico. We have demonstrated that the system has the sensitivity to detect fluorescence at the cell level and to resolve the dynamic signals generated by functional probes at rates up to 1 kHz. The improved sensitivity will be particularly useful for functional fluorescence studies with two or more fluorophores. In this context, the system should allow greater selectivity and flexibility than conventional techniques that employ dichroic mirrors or filters and separate photodetectors. Work is underway to characterize the effectiveness with which the system can be used with multiple fluorophores. Building a physical Compton camera is expensive; consequently, efficient simulation capability and accurate evaluation and optimization of system design performance are usually advantageous before the actual construction. Methods: The GEANT4 simulation toolkit has been used to simulate a silicon/cadmium zinc telluride Compton camera model. The Penelope physics model in GEANT4 was used to accurately model Compton scattering, including the atomic-binding effect and Doppler energy broadening. A variance reduction method was used to enhance the Compton scattering cross section of the scattering detector. Our prototype was compared with an existing silicon/sodium iodide model in terms of sensitivity and resolution. The performance of our prototype camera was assessed over a number of widely used radionuclides. Results: Our results show that our camera demonstrates higher sensitivity than the silicon/sodium iodide model at lower gamma-ray energies. Also, its sensitivity was significantly improved by enhancing the Compton scattering cross section. Increasing the detector segmentation improved the camera resolution but placed a limit on its sensitivity.
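Event reconstruction in a Compton camera of the kind simulated here rests on Compton kinematics: the energies deposited in the silicon scatterer and the absorber determine the scattering angle, and hence a cone of possible source directions. A minimal sketch of that relation follows; it is illustrative only, ignoring detector response and the Doppler broadening modelled above, and the example energies are assumptions.

```python
# Minimal Compton-cone sketch: scattering angle from the energy E1 deposited
# in the scatterer and E2 absorbed in the second detector (full absorption
# of the scattered photon is assumed).
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def cone_angle_deg(e1_kev, e2_kev):
    e0 = e1_kev + e2_kev  # incident gamma-ray energy
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e2_kev - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_theta))

# e.g. a 140 keV Tc-99m photon depositing 30 keV in the scatterer:
print(round(cone_angle_deg(30.0, 110.0), 1))  # ~89.7 degrees
```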
The increased sensitivity of our model at lower gamma-ray energies may well be connected with the higher atomic number of the absorber material. Our results show that cadmium zinc telluride could be a better absorber material for a Compton camera than sodium iodide. We analyse the safety test and performance test data from typical medical equipment to determine whether this standard has become outdated for managing the medical equipment inventory, when considered in conjunction with current manufacturing standards for medical equipment and the wiring standards required for hospitals. We selected three items of medical equipment: infusion pumps, ECG recorders, and external pacemakers. We examined the test requirements of AS/NZS 3551 and the routine performance test requirements as set out in various manufacturers' manuals. We analysed the safety test data and the performance test data for up to five years for the same device. We also examined the impact of the latest wiring regulations on the use of medical equipment in hospitals. The test requirements of the manufacturers varied from nil to comprehensive. The test data from the medical equipment showed that the safety test parameters of insulation resistance, leakage current, etc. did not change significantly over the analysis period of up to 5 years. Similarly, the performance tests showed that the equipment operated to specification each time the devices were tested. Discussion: With the safety test and performance test data showing no significant change over the life of the equipment, we discuss the impact this testing has on the risk of injury to the patient from electrocution, or injury related to operating medical equipment which is out of specification, in a hospital. We conclude that safety testing and routine performance testing have minimal impact on the risk of injury to the patient, either from electrocution or from the operation of medical equipment which is not performing to specification. This begs the question: 'Why are we putting significant resources into a practice that has minimal benefit?' Should we not re-examine the testing requirements detailed in AS/NZS 3551? Biomedical Technology Services, Queensland Health, Australia. Introduction: The Australasian Standard AS/NZS 3551 'Technical Management Programs for Medical Devices' recommends that inspection intervals for medical devices be at most annual unless a professional risk management program is in place. After many years of testing at least annually, many maintenance staff now report that they consider this testing interval excessive for many types of equipment with low patient/staff risk profiles. The increasing quantity and complexity of medical equipment are also greatly increasing the resources needed to support an annually based program covering all equipment. The use of professional risk management is becoming well developed in many health systems to assist with the prioritisation of activities. BTS is about to upgrade from ECRI-HECS to ECRI-AIMS. The time seems right for BTS to investigate how to take the step to a risk management approach in several aspects of its management of planned maintenance activity. Methods: In keeping with good corporate citizenship, it was decided to use the Queensland Health Risk Management Framework in the development of this approach. This uses a likelihood-consequence table to produce a five-level risk result, which then indicates the level of action and reporting required.
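A likelihood-consequence lookup of the kind just described can be sketched as follows; the five-by-five matrix below is an illustrative assumption, not the actual Queensland Health Risk Management Framework table.

```python
# Illustrative likelihood-consequence lookup producing a five-level risk
# result; the matrix values are assumed for the sketch.
CONSEQUENCES = ["insignificant", "minor", "moderate", "major", "catastrophic"]
LIKELIHOODS = ["rare", "unlikely", "possible", "likely", "almost certain"]

# rows: likelihood (rare .. almost certain); columns: consequence
MATRIX = [
    ["low",    "low",    "low",       "medium",    "high"],
    ["low",    "low",    "medium",    "medium",    "high"],
    ["low",    "medium", "medium",    "high",      "very high"],
    ["medium", "medium", "high",      "very high", "extreme"],
    ["medium", "high",   "very high", "extreme",   "extreme"],
]

def risk_level(likelihood, consequence):
    return MATRIX[LIKELIHOODS.index(likelihood)][CONSEQUENCES.index(consequence)]

print(risk_level("possible", "major"))  # -> "high": triggers a defined action level
```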
The other given was the ECRI-AIMS clinical risk equation, which allows five integer factors to be added together to give a risk score between 3 and 99. Two of the factors were used to enter the worst possible patient consequence (if clinical staff were not paying attention to equipment operation) and the likelihood of occurrence of such a failure if the maintenance was fully up to date. Other factors were used to indicate the level of maintenance generally applied to the device category, the relative reliability of models within the device category, and particular circumstances that might raise or lower the risk for individual pieces of equipment. The resulting clinical risk number then gives an indication of the unmanaged risk level of the device and can be used to prioritise overdue planned maintenance activity. Where the clinical risk level does indicate a need for a regular inspection program, setting the interval for this program should be based on the discovery rate of 'serious' failures (those that could negatively affect patient treatment) found during the program. Current discussions suggest that a target of about 1% is a reasonable level to aim for in such a program. It is also worth considering the possibility of extending performance inspection intervals until preventative maintenance (such as a battery change or a kit install) is required. The catch in undertaking a process as described above is that the failure rates will need continued monitoring into the future so that intervals can continue to be optimised. The methods outlined have already been piloted, and the outcome seems to agree with the 'gut feel' in most categories of equipment. One area of enlightenment occurred in the few areas where Biomed departments are responsible for device types that can negatively affect multiple patients, i.e. medical gas supply or reverse osmosis systems. Discussion: The other major factor that undertaking this sort of approach brings is that of inventory accuracy. In order not to overestimate the amount of inspection and PM work reported as not complete, and to be able to meet completion targets, it is important to ensure that the inventory is accurate and that 'missing' items are routinely followed up and resolved. Conclusions: The approach described seems to meet current health management practice, but before implementing such a system a BME group should ensure that the management of their health care group understands and is prepared to support this risk-managed approach to the maintenance of biomedical technology. Introduction: A unified biomedical asset management system (BAMS) has been created by a collaboration of biomedical engineers. The system incorporates asset maintenance, area testing, job recording and billing, reporting, and testing device interfacing. The motivation for the project is that systems to date have not met utility, maintainability, portability and cost requirements. The collaboration has engaged the La Trobe University Computer Science and Computer Engineering Department to develop the system. Two groups of graduating Computer Systems Engineering students have engineered the system, with final delivery in October 2008. The two systems will be demonstrated. Discussion: An analysis of the benefits and shortcomings of each particular system will be presented. The proposed benefits are:
1. Developed by a collaboration of practicing biomedical engineers across major hospitals
   - Only cost is an ongoing maintenance fee
2. Unified nomenclature for devices
   - ECRI (UMDNS), EC (GMDN), TGA (ATRG)
   - Simplifying reporting for the DHS (Vic) Targeted Equipment Program (TEP)
3. Interfaces with testing equipment
   - e.g. Fluke/Bio-tek 601 Pro, MEM ELEC-3S
4. Uses the new www.bme.asn.au Wiki
   - NCPE and ACHS common testing processes and protocols
   - Designed to allow biomeds to share processes, if they wish
   - Accommodates both the AS/NZS 3551 and the new IEC 62353 standards
5. Portable and interoperable
   - Web-based AND standalone use, with synchronisation and conflict resolution
   - Works on the widest range of operating systems and devices: PCs, Macs, laptops, PDAs, web phones, phones
Conclusions: A unified biomedical asset management system encourages best practice in biomedical engineering, leading to better healthcare outcomes. Many may be aware that it has become the practice in medical imaging to no longer repair to board level, but to module level only. The module can be an extensive system or simply a preamplifier. This module-replacement servicing is sold to the customer on three fronts: 1. It enables the supplier to maintain the quality accreditation of its product. 2. It enables rapid turn-around time. 3. It enables less skilled technicians to repair equipment. But is it a means by which the supplier can sell low but recoup its costs through servicing? I have had recent experience where such a methodology, if accepted, would have disadvantaged my organisation financially. A quote was received to replace a collimator on a catheter lab image intensifier; the quote was for $100,000. The solution was to tighten a grub screw on a drive shaft. A quote was received to replace an image intensifier video chain for $50,000. The solution was to resolder a dry joint. A 64-slice CT under warranty failed. The firm wheels in a replacement computer stack. The power supply fails. The firm wheels in a new stack, which refuses to boot. The firm decides to import a new stack from overseas. The hospital technician insists on exchanging power supplies. The system boots and the CT is back on the air. A solid state imaging array fails. The repair cost quote is $198,000. A new array can be purchased from the USA for $86,000. Why the extra costs? Investigation reveals that Australian prices for parts from the companies are higher than US prices. Most hospitals are willing to place high-end equipment under comprehensive contracts. This enables firms to place any cost on this product. This paper explores possible solutions through: 1. Integrated contracts. 2. Use of third party servicing agents. 3. Re-skilling of BME staff. Electronic Health Records will be the backbone of the patient record, and clinical engineers will be required to design and support quality data feeds for two types of data flows: data flow to and from equipment at the bedside, and automated data flow via linkages to the EHR. The responsibility domain of the clinical engineer has not changed in terms of equipment, acquiring data and data display, but data linkage, data flows and downstream processing of this data have greater importance. Designing local medical networks and managing change and risk for the system and components of the system is, however, a new role; the ultrasound acquires images and measurements, the measurements are sent to an appropriate device, the images and measurements are reviewed, and the diagnosis and reports are completed. This completed report with images forms part of the patient record and treatment plan. Ensuring required data and images are available and correctly mapped over time is important to patient safety.
The required skills for clinical engineering are: practising clinical engineer, knowledge of biomedical equipment management systems, systems engineering, identifying and managing risk, understanding of network concepts, and management of change. The early detection and resolution of device/system linkage issues and device/system technical events at Electronic Health Record interfaces can minimise adverse clinical outcomes. Clinical engineers are well positioned to design local medical networks of systems and to manage change for the systems and the various components of the system. Clinical engineers are essential for the successful and safe operation of medical critical systems designed around networked medical devices and systems. Speaker: Dr Simon Ling, MD, Director Sales and Marketing, IBA. The construction of a Proton Therapy Center is a complex and lengthy process. With 6 proton therapy facilities already installed and another 7 currently being installed, IBA has the most experience in proton therapy facility installations. This presentation describes the full process of designing, constructing, installing and commissioning an IBA Proton Therapy center. Pictures and descriptions of previous and current constructions are shared. It also describes the timeline needed for the installation of the PT center. Director, Monash Centre for Synchrotron Science, Monash University, Clayton, VIC 3800, Australia. The uniquely flexible properties of synchrotron x-rays are allowing researchers to explore three novel radiotherapy modalities. These are Stereotactic Synchrotron Radiotherapy (SSRT), Photo Activation Therapy (PAT) and Microbeam Radiotherapy (MRT). SSRT and PAT exploit the tunability of the energy spectrum to target specific elements which have been introduced into the tumour, and thereby significantly enhance the dose delivered to the tumour whilst minimising the dose to surrounding tissues. MRT is fundamentally different and is based upon the dose volume effect. It makes use of the high collimation inherent in synchrotron beams to allow the treatment beam to be spatially fractionated on a micron scale. This has the effect of allowing single fraction doses more than one hundredfold greater than conventional systems to be delivered while sparing normal tissue. Whilst none of these ideas has yet reached clinical trials, all of them are showing significant promise. An overview will be given of the current state of the art, highlighting work at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, the National Synchrotron Light Source (NSLS) in Brookhaven, New York, and SPring-8 in Japan. Major investments are being made in facilities at the ESRF and at the Australian Synchrotron. The Imaging and Therapy Beamline at the Australian Synchrotron will give Australasian researchers one of the finest facilities in the world with which to explore the possibilities. Introduction: The aim of this study was to demonstrate a palliative effect of MRT in two different mouse tumour models, and then to describe the short-term cellular response to MRT in normal and tumour tissues. Methods: All irradiations were performed at the SPring-8 synchrotron in Japan using a polychromatic x-ray beam with a median energy of 110 keV. The beam was segmented into an array of 30 micron microbeams with peak-to-peak separations of 200 microns using a fixed collimator.
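As a small illustration of the spatial fractionation this geometry implies, 30 micron beams on a 200 micron pitch place only a fraction of the tissue in the direct beam paths; the sketch below computes that fraction from the stated geometry (the helper function and names are illustrative, not from the study).

```python
# Illustrative: fraction of tissue lying in the microbeam paths for
# 30 um beams on a 200 um peak-to-peak pitch (geometry stated above).
BEAM_WIDTH_UM = 30.0
PITCH_UM = 200.0

in_beam_fraction = BEAM_WIDTH_UM / PITCH_UM
print(f"{in_beam_fraction:.0%} of tissue lies in the microbeam paths")  # 15%

def in_peak(x_um):
    """True if transverse position x (um) falls inside a microbeam peak."""
    return (x_um % PITCH_UM) < BEAM_WIDTH_UM
```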
BALB/c mice were inoculated subdermally on the right hind leg with EMT-6.5 or 67NR tumour cells and irradiated 6-9 days later with either 2 x 280 Gy or 2 x 560 Gy peak, in-beam air doses of MRT. The beams were delivered in a cross-hatched fashion, rotating the mouse 90 degrees between the first and second irradiation. Mice were culled when the maximum length of the tumour reached 11 mm. In other MRT experiments, dorsal skinflaps and EMT-6.5 tumours were irradiated and tissues collected for immunohistochemical studies at different time points over the following 5 days. Four hours before culling, the mice were injected with BrdU, a marker of DNA replication (proliferation). The median survival times for EMT-6.5 and 67NR tumour-bearing mice following MRT (2 x 560 Gy) increased from 16 to 29 and from 27 to 42 days respectively, compared to unirradiated controls (P < 0.0005). Immunohistochemistry for phosphorylated γH2AX (a surrogate marker for DNA DSBs) demonstrated nuclear localisation within cells situated in zones through the tissues that correlated well with the microbeam spacing. γH2AX-positive cells were readily detectable from 6 hrs to at least 4 days post MRT, suggesting that cells receiving very high levels of radiation fail to repair all DNA double strand breaks. Double immunohistochemical staining with antibodies to γH2AX in conjunction with BrdU, CD31, CD45 and other markers has provided insights into both the normal and tumour tissue responses to MRT. Within 24 hours of MRT there was evidence of cell migration both into, and out from, irradiated zones. This was particularly true in the case of the EMT-6 tumour tissue. The spatial localisation of the microbeams, visible 30 minutes post-MRT, was lost by 24 hours. MRT-irradiated tumours were less proliferative than unirradiated controls. Radiation-induced apoptosis was observed at least 8 hrs post MRT. Irradiated endothelial cells in the tumour were still present up to 4 days post treatment. The EMT-6 tumour cells are highly migratory post-MRT, as irradiated and minimally irradiated cells intermix. MRT reduces tumour cell proliferation. There is some evidence of apoptosis post high-dose MRT in tumour and skin. There is little evidence of a microvascular effect. Introduction: The purpose of this work is to investigate the radiosensitisation of cells by gold nanoparticles (AuNps) for radiotherapy treatment. Dose enhancement factors were used to quantify the radiosensitisation for superficial x-ray, electron and photon beam radiotherapy. The effectiveness of the gold nanoparticles as dose enhancers is also being tested for microbeam radiotherapy. Methods: Bovine Aortic Endothelial Cells (BAECs) were irradiated with kilovoltage superficial x-rays and megavoltage electron and photon beams in the presence of various concentrations of AuNps (0.25 to 1 mMol). BAECs were cultured with AuNps 24 hours before irradiation, and confocal microscopy images confirmed that the AuNps were inside the cells during irradiation. Cell survival was measured by colorimetric assay. This experiment was repeated using AuNps-doped nPAG (normoxic polymer gels) to confirm the dose enhancement produced by AuNps. For the microbeam radiotherapy study, two samples of HaCaT cells and BAECs, with and without AuNps, were irradiated with 25 μm wide, 10 mm high parallel microbeams at 25 Gy, 100 Gy and 200 Gy. Results and Discussion: AuNps were found to enhance cell killing up to 21-fold for 1 mMol of AuNps with 80 kVp x-ray beams.
For megavoltage electron and photon beams there was also dose enhancement, but to a lesser extent. Dose enhancement was also found to be optimal at 80 kVp, with a mean energy of around 40 keV. Cytotoxicity tests showed that AuNps reduced cell viability to 40%. Measurements using AuNps-doped nPAG also exhibited higher polymerisation, indicating increased photoelectric interactions, Auger electrons and characteristic x-rays. AuNps were also found to be effective in reducing the dose required for microbeam radiotherapy. This in-vitro study demonstrated that AuNps enhanced the dose to the cells in the kilovoltage range of x-ray beams used, and to a lesser extent when megavoltage beams were used. The level of radiosensitisation depends on the AuNps concentration, radiation energy and radiation dose. AuNps also show a therapeutic advantage for microbeam radiotherapy, where lower doses are required to treat tumours compared to treatment without AuNps. Introduction: Intensity-Modulated Radiation Therapy (IMRT) achieves optimal dose conformity to the tumour, avoiding critical structures, through the use of spatially and temporally modulated radiation fields. The average dose-rate and instantaneous dose-rate (pulse amplitude) to a single voxel in the treatment volume are highly variable within a single IMRT fraction. In this study we isolate these variables and determine their impact on cell survival. Methods: Two cell lines of differing radiosensitivity were examined: human melanoma (MM576) and non-small cell lung cancer (NCI-H460). The cell survival fraction was assessed using a clonogenic assay, following exposure to a 6 MV photon beam produced by a Varian Clinac 21X. To mimic and isolate known temporal variables, the cell cultures were exposed in three ways:
- at the isocentre, varying the pulse repetition frequency [PRF]
- keeping the PRF constant, but varying the distance from source to cell layer
- varying the source to cell layer distance and PRF such that the overall treatment time was constant.
The survival fraction was observed to be independent of the instantaneous dose-rate. A statistically significant trend to increased survival was observed as the average dose-rate was decreased, for a constant total dose. Discussion: The results are relevant to IMRT practice, where average treatment times can be significantly extended to allow for transit motion of the MLC and beam positioning. Our in vitro experimental study adds to the pool of theoretical evidence describing the consequences of protracted treatments. We find that extended delivery times can substantially increase cell survival. This also suggests that regional variation in the dose-rate history across a tumour, which is inherent to IMRT, will affect radiation dose efficacy. Conclusion: Temporal factors in radiation therapy have a significant effect on cell survival and ultimately on treatment outcome. Contemporary radiobiological models underestimate the importance of dose-rate effects within the single-fraction range. New radiobiological models that incorporate realistic temporal effects, in line with IMRT treatment durations, need to be developed. The conventional model of radiation cell kill is grounded in the assumption that a local interaction between the radiation and cellular DNA is required to cause cell death.
The conventional model of radiation cell kill is grounded in the assumption that a local interaction between the radiation and cellular DNA is required to cause cell death. It is now known that multiple mechanisms exist that can lead to cell death ('bystander' cell killing) and regulation, including the influence of signalling molecules generated distal to a particular cell. To explain experimental results which challenge the conventional model, we have developed response models that incorporate response to communication signals generated non-locally. These models have been tested against our experimental results, enabling us to quantify the relative proportions of direct and bystander cell killing. Methods: Three models were considered. All included a direct radiation interaction as the primary cell killing mechanism, described by a linear-quadratic response. The models incorporate a secondary response to a bystander signal, where the signal was generated by the direct action of radiation via a lethal (Model 1), lethal or non-lethal (Model 2), or non-lethal (Model 3) interaction with another cell. The models were fitted to the experimental results for two cell lines, malignant melanoma (MM576) and non-small cell lung cancer (NCI-H460), following uniform and non-uniform exposure. The model parameter values were then derived. Results: The experimental data was best fitted using Models 1 and 2, suggesting that the cell response to bystander signals is likely to be a lethal event. It was initially expected that Model 3 would provide a better fit, since this model naturally leads to a saturation of bystander cell killing with increasing dose, an effect that has previously been observed in the context of alpha-irradiation experiments. The parameters extracted from the models can be used to quantify the physical mechanisms involved in direct and bystander-induced cell response. Fig. 1 shows an example of the contributions of cell killing to overall cell survival for the NCI-H460 cell line using Model 2. The gradient field experiments performed on the two cell lines represent ideal situations for investigating these models, in that they allow separation of the direct and bystander survival components. The parameter values derived for the models identify the bystander signal as an important factor in determining the ultimate cell survival fraction. The bystander component of cell response was much greater in MM576 than NCI-H460 cells. For all models, the bystander response accounted for between 20% and 50% of total cell death. The results modelled in this study cannot be explained using conventional independent-cell response models, providing more evidence for mechanisms of tumour response that involve inter-cellular signalling. The models investigated adequately replicated the experimental results, though considerable knowledge of how cellular communication acts in vivo will be required before the models can be extended to clinical situations. Efforts are currently underway to more fully understand the competing mechanisms involved in modulating bystander responses, including the effect of temporal dose modulation. Acknowledgements: This work was supported by a research project grant from the NSW Cancer Council.
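For readers wanting the shape of these models, the following is a generic sketch of the shared direct-plus-bystander structure described above, under stated assumptions; it is not the authors' published formulation, and all parameter values are placeholders rather than the fitted values referred to in the abstract.

```python
import numpy as np

# Placeholder parameters -- illustrative only, not the fitted values from the study.
ALPHA, BETA = 0.2, 0.03   # direct linear-quadratic coefficients (Gy^-1, Gy^-2)
SIGMA = 0.4               # maximum probability of bystander-induced death

def direct_survival(dose):
    """Probability a cell survives the direct radiation insult (LQ response)."""
    return np.exp(-(ALPHA * dose + BETA * dose**2))

def bystander_survival(dose):
    """Probability a cell escapes bystander killing. Here the signal is
    assumed proportional to the fraction of neighbours killed directly
    (a Model 1-like, lethal-interaction signal source)."""
    signal = 1.0 - direct_survival(dose)
    return 1.0 - SIGMA * signal

def total_survival(dose):
    # Direct and bystander killing treated as independent channels.
    return direct_survival(dose) * bystander_survival(dose)

for d in (0.5, 1.0, 2.0, 4.0, 6.0):
    print(f"{d:3.1f} Gy: total SF = {total_survival(d):.3f} "
          f"(direct alone {direct_survival(d):.3f})")
```

A Model 3-like variant would instead drive the signal with a non-lethal interaction term that saturates with dose; comparing such variants against gradient-field survival data is what allows the direct and bystander components to be separated.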
Introduction: This report from the Trans-Tasman Radiation Oncology Group (TROG)/ACPSEM physics advisor (AH) will provide an overview of recent clinical trial activities that are of relevance to the physics community and an update of developments in software to support clinical trial radiotherapy (RT) case reviews. Methods: Over the past 2 years there has been increasing interest in using advanced technology in clinical trials. Significant physics involvement began with the "RADAR" trial, which opened in 2003 and was used as a vehicle for introducing conformal radiotherapy with dose escalation in many centres. With the success of RADAR, TROG now encourages new trials to include advanced technology, which may enable centres to draw on the support of other centres to introduce new techniques and prove their efficacy. Clinical trial participation is often seen as an additional burden on an already busy clinic; however, clinical trials are necessary to confirm that new treatments are superior to conventional therapies, and trial QA is necessary to ensure trial results are valid and can be translated to every clinic. IMRT credentialing overseas has revealed significant errors in dose delivery in some centres, and has been welcomed in other centres as reassurance that they have achieved an independently determined acceptable standard of accuracy in treatment delivery. Additionally, feedback that facilitates streamlining of processes has improved efficiency in treatment delivery and provided confidence to extend services to multiple treatment sites. Significant physics involvement in developing an IMRT credentialing program will now enable Australasian trials to include IMRT techniques. With the success of this group comes the enthusiasm to develop a wider portfolio of credentialing programs for other advanced technologies such as IGRT, advanced imaging techniques, motion management, stereotactic techniques (cranial and extracranial) and brachytherapy; this provides physicists with the opportunity to become involved in clinical trials and to access the support of other centres in introducing new technologies or streamlining existing processes. Objective assessment of treatment planning data is being provided by the SWAN software system, which will continue to be developed following allocation of grant support from Cancer Australia. This includes development of additional plan assessment facilities (including IMRT-specific tools), development of web-based access and support facilities, and the development of tools to aid the use of SWAN in educational activities. Additionally, SWAN has been adopted by TROG for the quality assessment of planning data in TROG trials, with remote access functionality being provided via thin-client technologies. Conclusion: Increasing interest in including advanced technology in clinical trials provides an ideal opportunity to validate their efficacy, provide support and assistance to centres introducing new techniques, and confirm appropriate implementation through audit processes. Physics support in developing credentialing programs, and the advancement of tools for 3D treatment plan review, now confirm that Australasian clinical trials meet international standards in quality control.

TROG IMRT Working Group* Introduction: It is expected that radiotherapy centres undertaking an intensity modulated radiotherapy (IMRT) program have established considerable expertise in radiotherapy and in the process have followed appropriate recommendations for the implementation, commissioning and quality assurance (QA) of their entire IMRT program. All centres participating in a clinical trial utilising IMRT may not perform QA to the same level or follow similar practices during plan generation and delivery; hence guidelines are required to ensure a consistent and safe approach is applied to deliver robust trial results for each IMRT trial component.
In October 2007, under the auspices of the Trans-Tasman Radiation Oncology Group (TROG), an IMRT working group was established with the aim of developing an IMRT credentialing program for TROG clinical trials. The working group identified several key areas of the clinical trials process that required further development to accommodate the use of IMRT; these included a facility questionnaire, a trial protocol template, benchmarking exercises and the treatment plan review process. The benchmarking component of clinical trials QA will be discussed in detail. Discussion: It is difficult to establish a common set of guidelines to address the diverse range of technologies and techniques that fall under the generic definition of IMRT. The additional QA measures imposed on each trial utilising IMRT must be robust, reasonable, achievable and consistent with the current clinical trials environment that exists in Australia and New Zealand. IMRT is already being used in clinical trials. The ongoing technological evolution of radiotherapy dictates that the various disciplines involved in clinical trials development must prepare to address the use of these future technologies and evolving techniques for meaningful trial outcome analysis.

Introduction: The PROFIT trial (TROG 08.01) is a randomised trial of hypofractionated external beam radiotherapy for prostate cancer, with OCOG (Ontario Clinical Oncology Group) as the lead group. Both arms of the trial allow IMRT to be used to meet strict dose constraints for the target and organs at risk. Prior to site activation, a credentialing process will determine whether the site is eligible to participate in the trial. As part of this process, each facility that intends to use IMRT for trial patients is required to plan and deliver an IMRT treatment to an anthropomorphic phantom, namely the Elvis phantom used in the RADAR trial (TROG 03.04). A site visit by an independent quality assurance team will carry out a dosimetry audit of the Elvis phantom's IMRT treatment using ionisation chamber measurements and radiochromic film. The methodology for the credentialing will follow that of the RADAR trial. The anthropomorphic pelvic phantom (Elvis) will be used to audit IMRT treatment delivery. Other parts of the facility accreditation include completing a facility questionnaire and completing five "dry-run" treatment plans, which are reviewed externally. 3D real-time review of treatment plans is an integral part of the quality assurance of PROFIT. All treatment plans will be reviewed by a clinician and an RT/physicist from the trial QA team prior to randomisation using the SWAN software package, with feedback given within 24 hours of submission. The treatment plans will also be reviewed in 2D by PROFIT reviewers in Canada following randomisation, with feedback given within the first three fractions of treatment. Results: An on-site protocol has been developed for credentialing facilities participating in the PROFIT trial. The protocol aims to audit the treatment delivery of IMRT, including the methods used for confirming phantom set-up. The dosimetry audit will comprise dose plane measurements with radiochromic film (Gafchromic EBT) and point dose measurements with an ionisation chamber (Scanditronix Wellhofer CC13). Two coronal dose planes will be measured, one through the isocentre and the other 2 cm posterior to the isocentre (in the vicinity of the prostate-rectal wall interface).
The exposed radiochromic film will be analysed with a document scanner that has been commissioned according to the methodology described by van Battum et al. 1 A gamma analysis will be performed to compare measured dose planes with treatment planning system dose planes using a 5%/3 mm criterion. The total dose at the isocentre, measured by the ionisation chamber, will be compared to the predicted total dose with a 3% criterion for acceptance. The RADAR trial quality assurance used TLDs to monitor dose at defined points of clinical interest in conformal treatment plans of the Elvis phantom. The choice of radiochromic film rather than TLDs for dose measurement in an audit of IMRT delivery allows analysis of more dosimetric information at higher spatial resolution, which is essential for IMRT. Conclusions: A protocol has been developed for credentialing of radiotherapy facilities participating in a clinical trial (PROFIT) allowing IMRT treatment delivery.
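As a point of reference for the film comparison specified above, a gamma evaluation of this kind can be sketched in a few lines. The following is a minimal brute-force implementation for two dose planes on a common grid, assuming a global dose criterion normalised to the reference maximum and a finite search radius; an audit would of course use validated software rather than this sketch.

```python
import numpy as np

def gamma_index(ref, evl, pixel_mm, dose_frac=0.05, dta_mm=3.0, search_mm=6.0):
    """Brute-force global 2D gamma index. `ref` and `evl` are dose arrays on
    the same grid; the dose criterion is a fraction of the reference maximum."""
    dd = dose_frac * ref.max()            # absolute dose difference criterion
    r = int(round(search_mm / pixel_mm))  # search radius in pixels
    ny, nx = ref.shape
    gamma = np.empty(ref.shape)
    for j in range(ny):
        for i in range(nx):
            best = np.inf
            for dj in range(-r, r + 1):
                for di in range(-r, r + 1):
                    jj, ii = j + dj, i + di
                    if 0 <= jj < ny and 0 <= ii < nx:
                        dist2 = (dj * dj + di * di) * pixel_mm**2
                        diff2 = (evl[jj, ii] - ref[j, i])**2
                        best = min(best, dist2 / dta_mm**2 + diff2 / dd**2)
            gamma[j, i] = np.sqrt(best)
    return gamma

# Synthetic demo: a Gaussian dose plane against a 3% hotter evaluated plane.
y, x = np.mgrid[0:60, 0:60]
ref = 2.0 * np.exp(-((x - 30)**2 + (y - 30)**2) / 200.0)  # Gy
evl = 1.03 * ref
g = gamma_index(ref, evl, pixel_mm=1.0)
print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")
```

Restricting the search to about twice the distance criterion keeps the brute-force loop tractable while rarely changing the result, since points further away seldom minimise gamma.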
Introduction: Credentialing of radiation oncology centres is an essential quality assurance (QA) measure in multicentre clinical trials involving complex radiotherapy techniques. It typically includes a trial planning exercise, which requires contouring of the target volumes and organs at risk and developing a treatment plan that conforms to all dose volume constraints specified in the study protocol. The aim of the present study was to determine the variability of treatment plans for partial breast irradiation using external beam three-dimensional conformal radiotherapy (3D CRT) in the centre credentialing process for a Trans-Tasman Radiation Oncology Group (TROG) study, "TROG 06.02 A multicentre feasibility study of accelerated partial breast irradiation using 3D CRT for early breast cancer". Methods: Computed tomography (CT) image sets of two patients with early breast cancer treated with conservative surgery were selected for the planning study. The patients had a right-sided and a left-sided breast cancer, respectively, and one of the patients had surgical clips placed at the base of the surgical cavity. The patients were scanned within 12 weeks of surgery and the surgical cavities were visible. The two CT image sets were provided to the seven centres that participated in the credentialing process, and contouring of the protocol-specified target volumes and organs at risk was completed by investigators at the respective centres. Treatment plans were generated for each of the two cases at each centre according to protocol specifications. Five of these centres also generated treatment plans using the same two CT image sets in which the target volumes and organs at risk were contoured by the Study Chair. Dose volume parameters were scored for the target volumes and organs at risk, including the ipsilateral lung, contralateral breast and heart. Results: Figure 1 shows the results of the dosimetric comparisons of treatment plans generated using volumes contoured by individual investigators at each centre (variable outlines) and volumes contoured by the Study Chair for each of the two test cases. The dose volume outcomes for the target (shown here is the volume receiving 90% of the prescribed dose, V90%) are similar. However, the volumes of ipsilateral breast receiving a high (95%) or medium (50%) dose were reduced when identical outlines were used, as was the lung volume receiving 30% of the prescribed dose. In general, the variation in dose distributions achieved by different centres using identical outlines was found to be smaller than the variation in dose distribution when each centre developed its own outlines. Conclusions: Even in complex treatment plans such as the non-coplanar field arrangements used for 3D conformal partial breast irradiation, there is a significant influence of outlining practice on dose distribution. Given the additional variation in dose delivery introduced by contouring, quality assurance in clinical trials needs to carefully assess both technical approaches and the contouring of relevant structures. Therefore, the technical review of trial patients requires both clinical and technical input. Acknowledgement: The collaboration of Princess Alexandra Hospital, Brisbane; Calvary Mater Hospital, Newcastle; Royal North Shore Hospital, Sydney; William Buckland Radiotherapy Centre, Melbourne; Auckland City Hospital, Auckland; and Waikato Hospital, Hamilton is acknowledged.

Auckland City Hospital, Auckland, New Zealand. The TROG 06.02 Accelerated Partial Breast Irradiation study was opened at the Auckland Radiation Therapy department in early November 2007, and we were able to accrue patients onto the trial until May 2008, when it closed after reaching the required total number. The department implemented a very successful multidisciplinary approach for this trial, which ensured that the deadlines required for the real-time plan review were always met. The department was fortunate enough to have a dedicated radiation therapist (0.8 FTE funded by The Breast Cancer Research Trust, NZ) assigned to co-ordinate all aspects of the running of this trial. A very close working relationship was maintained with the physicist in the team as patients were recruited, planned and treated. Excellent communication with all staff in the department was maintained right from the beginning, so that everyone had the opportunity to learn about the trial and provide feedback. This in turn led to comprehensive training being provided to all staff groups, since the technique required some procedures that were new to the department. Due to the large technical emphasis of the trial, it required a substantial amount of physics time, from the initial credentialing stage through to when recruitment ended. The complex planning technique used was completely new to Auckland, so there was a steep learning curve and a lot of work required during the credentialing and implementation stages. Problem solving also became a significant requirement as each individual patient plan was carried out. It was therefore essential for the success of the trial that the physicist was involved in the early stages of implementation. This highlights that it will be very important for physicists to dedicate time to future trials which contain complex technical components.

Introduction: There has been great interest recently in the application of highly conformal hypofractionated radiation therapy for early stage lung cancer. In these treatment approaches, fractions of typically 10 to 20 Gy are delivered to the target using extracranial 'stereotactic' approaches with many non-coplanar fields. A proposal has been presented to the Trans-Tasman Radiation Oncology Group (TROG) to conduct a randomised trial of extracranial stereotactic radiotherapy for lung cancer (3 x 20 Gy) compared to conventional fractionation (30 x 2 Gy) delivered together with chemotherapy.
Only patients with peripheral lesions will be eligible, as the high dose per fraction has been shown to cause significant toxicity for targets close to vital structures. The novel treatment approach is technologically challenging and requires stringent quality assurance. The present paper describes the treatment techniques and provides some suggestions for technological solutions. Approach: There are several technological issues that must be considered in the proposed trial:
1. Treatment planning: The trial will require 8 or more non-coplanar, non-opposing radiation fields of 6 MV x-rays that should achieve a conformity index (volume covered by the prescription isodose / PTV) of 1.3 (see the sketch after this abstract). IMRT will not be allowed initially because of its complexity, the dosimetric uncertainty with small field segments applied to a moving target in lung, the length of time required to deliver a high dose, and the lack of established QA procedures within TROG.
2. Immobilisation: Including image guidance, the delivery of a fraction of 20 Gy is expected to take more than 30 minutes. It will be essential to position the patient reproducibly and comfortably for treatment. At a minimum, immobilisation should consist of a whole body vac-loc bag.
3. Motion management: In order to keep the dose to surrounding normal structures as low as possible, it will be important to restrict the size of the internal target volume (ITV). Motion-corrected data acquisition such as 4D CT will be required for treatment planning. Several methods can be used to address tumour motion:
a. The motion is accounted for in an ITV without use of other motion correction methods; this is likely to be successful only for tumours with small motion.
b. Abdominal compression can be applied to reduce motion.
c. Radiotherapy can be delivered in a gated fashion.
d. Tumour tracking may be employed.
4. Image guidance: It has been demonstrated that even with good immobilisation, tumours can vary in position in relation to the bony anatomy from day to day. As such, image guidance will be mandatory. The imaging technology should have enough soft tissue contrast to be able to visualise the tumour itself. The use of implanted fiducial markers would be a good method to achieve this. In order to provide meaningful data for the overall treatment, we propose to acquire at least three image sets: prior to, in the middle of, and towards the end of treatment.
Discussion: Hypofractionated radiotherapy for early stage lung cancer constitutes a significant change to conventional radiotherapy approaches. It has been used with excellent tumour control outcomes in North America, Japan and Europe. The trial proposed for Australasia through TROG explores this concept and its feasibility further by comparing it directly to chemoradiation as the best possible conventional approach. It is anticipated that the trial will recruit approximately 100 patients from multiple radiotherapy centres over 5 years. Given the high doses involved, quality assurance will be essential, and participating centres will be required to undergo a credentialing process. In addition, site visits and real-time review of participating patients are planned. Conclusions: The proposed trial will combine a variety of technologically advanced features of radiotherapy to deliver very high doses per fraction to lung cancer patients. The present paper will explore the technological challenges in this process and discuss methods to address them, with the result that many radiotherapy centres in Australasia will be able to participate in the trial.
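For item 1 above, the conformity index is a simple ratio of volumes that can be computed directly from the dose grid and the PTV mask. The sketch below assumes both are available on the same grid; the synthetic geometry and prescription values are purely illustrative.

```python
import numpy as np

def conformity_index(dose, ptv_mask, prescription_gy, voxel_volume_cc=1.0):
    """Conformity index as defined in the trial proposal:
    volume enclosed by the prescription isodose / PTV volume."""
    v_isodose = np.count_nonzero(dose >= prescription_gy) * voxel_volume_cc
    v_ptv = np.count_nonzero(ptv_mask) * voxel_volume_cc
    return v_isodose / v_ptv

# Synthetic example: a spherical PTV with a slightly larger prescription isodose.
z, y, x = np.indices((60, 60, 60))
r = np.sqrt((x - 30)**2 + (y - 30)**2 + (z - 30)**2)
ptv = r <= 10
dose = 60.0 * np.exp(-np.maximum(r - 10, 0) / 8.0)       # 60 Gy inside the PTV
ci = conformity_index(dose, ptv, prescription_gy=57.0)   # 95% isodose surface
print(f"CI = {ci:.2f}  (protocol target: <= 1.3)")
```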
A fundamental component in the innovation process is the licensing and technology transfer agreement. This presentation will review the key issues that arise, including the subject matter or intellectual property involved, the nature and extent of the rights being granted or obtained, the possible financial arrangements as well as other commercial issues, and general issues which usually need to be considered and negotiated.

A Medical Device Partnering Program has been established in South Australia to provide a streamlined approach to the medical device product development process, from early stage concepts right through to manufactured products. The Medical Device Partnering Program is end-user and market driven and engages researchers, clinicians, client services and companies. The program provides a platform for identifying and assessing new product opportunities with clinical relevance, undertakes early-stage technical exploration projects to develop concept models and demonstrate product potential, coordinates and manages relationships between stakeholders, and provides advice and assistance along the product development pipeline. The Medical Device Partnering Program is an initiative driven by Flinders University and supported by research, government and commercial partners. The program was officially launched on 21st July 2008. This presentation will provide an insight into the program models, and an update on current activities.

Moving from a prototype to production can often be a frustrating process. The process can be eased by involving the customer and the future manufacturer in the development cycle at an early stage. The importance of understanding the customer's needs accurately should be self-evident, but is frequently overlooked. Some of the key manufacturing considerations include:
- Design for Manufacturing (DFM)
- Design for Testing (DFT)
- component selection
- information in the Bill of Materials (BOM).
A stage gate system can help control the process leading up to product release. In this presentation I will discuss the above points and their effect on product cost, quality and serviceability. Speed to market is also frequently important and is set by getting these factors correct. The elements of good manufacturing quality will be presented, such as know-how, documentation, change control, quality systems, and reliable suppliers. Manufacturing for an international market requires compliance with marketplace regulations. Two topical items are RoHS and WEEE. These requirements will be introduced.

Introduction: Monte Carlo simulation of contrast-detail experiments requires the simulation of about 10^11 photons. This places extreme demands on computing requirements. Only now do readily available computers have sufficient performance to accurately model such experiments in reasonable times. Such a simulation was used to study the effect of focal spot size on contrast-detail performance. Methods: The EGSnrcMP Monte Carlo code was used to model a digital radiography (DR) system. A contrast-detail pattern placed on a patient-equivalent phantom was modelled. Simulation of the imaging system included the focal spot of the x-ray tube, couch top, anti-scatter grid, automatic exposure control (AEC) detector and image receptor.
A number of simplifications were made in order to decrease the runtime. Three 2.5 GHz AMD Athlon computers were used, taking eight days of CPU time per image. Images were produced for a range of focal spot sizes, with four images produced for each size. These were scored by eight observers. Contrast-detail curves were plotted and analysed using a multivariate least-squares regression. The results showed that despite the presence of scattered radiation and quantum noise in the images, a true point source produced the best image quality. The results were statistically significant. Discussion: It is often claimed that since diagnostic x-ray image quality is quantum limited, below a given focal spot size no further improvement in image quality is possible by further reduction of the focal spot size. This assumption was shown to be incorrect. However, if the detective quantum efficiency (DQE) is redefined to be a system descriptor rather than just an image receptor descriptor, then the results of the experiment are found to be consistent with the theory. Conclusions: Monte Carlo simulation is a viable and very useful method for simulation of contrast-detail experiments and image production. Focal spot sizes cannot be chosen on quantum noise arguments alone.

Introduction: Gold nanoparticles have been proposed as a new basis for contrast media in diagnostic radiology. Good results have been obtained in a small animal study by Hainfeld et al. Their experimentation was conducted using mammographic imaging equipment. Although there has been some investigation into contrast-aided mammography, most procedures employing contrast agents utilise higher tube potentials to produce more penetrating x-rays. Methods: A phantom was constructed from Perspex. This phantom featured cylindrical wells (4 mm in diameter) to simulate a small portion of blood vessel. Each was loaded with either gold nanoparticle solution or iodinated CM at equal concentration (0.5077 M radiopaque element). This phantom was imaged under full scatter conditions in computed radiography (40-80 kVp) and computed tomography (80-140 kVp). Images were analysed by signal-to-noise ratio. These values were compared to theoretical signal values based on x-ray tube spectra collected by a CdTe detector with an MCA, and known mass attenuation coefficients for bulk gold and iodine. Results: Image data was in close accordance with theoretical findings. At low diagnostic tube potentials, less than 50 kVp, gold nanoparticles displayed up to 60% greater signal-to-noise ratio than iodine. Gold nanoparticles were also particularly effective in CT imaging. At 140 kV, gold nanoparticles produced over two times greater signal than iodine. At energies between 60 and 100 kVp, little difference in SNR was observed. The CdTe attenuation spectra are in accordance with the image results. Gold nanoparticles show a greater probability of attenuation than iodine for photons below approximately 35 keV and above 80 keV. Discussion: The data indicate a correlation with expected results. Low energy images show greater attenuation, and subsequently greater signal, using gold nanoparticle solution over iodinated CM. Here the probability of the photoelectric effect depends on an atom's atomic number, giving gold a distinct advantage (Z_Au = 79, Z_I = 53). Gold nanoparticles were also particularly effective in CT imaging. At these potentials Compton scattering is the dominant x-ray interaction. Compton scattering relates to a material's physical and free electron densities. Gold has only a 50% greater physical density than iodine, but these findings suggest that there may be an increase in free electron density in gold nanoparticles due to a quantum size effect, manifest as a decrease in the binding energy of electrons on the large percentage of surface atoms. Images at angiographic tube potentials, however, displayed comparable contrast enhancement between both samples. This could be predicted from iodine's k-edge value at 33 keV, giving both materials similar attenuation coefficients for most of this energy range. Conclusion: The data indicate that a solution bearing gold nanoparticles would be an effective alternative to iodinated CM in diagnostic radiology. Image contrast is comparable to iodine at angiographic tube potentials and improves significantly at mammographic and CT energy ranges.
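The theoretical comparison described above can be sketched as a spectrum-weighted transmission calculation: the expected signal behind each well is the tube spectrum attenuated by the well contents, integrated over energy. The spectrum, mass attenuation coefficients and areal densities below are coarse placeholders, not the measured CdTe spectra or the tabulated coefficients used in the study.

```python
import numpy as np

# Placeholder spectrum and attenuation data (illustrative values only).
energies_kev = np.array([20, 30, 40, 50, 60, 80, 100])
fluence = np.array([0.2, 1.0, 0.9, 0.6, 0.4, 0.2, 0.05])      # relative photons/keV
mu_rho_au = np.array([78.0, 25.0, 11.6, 6.4, 4.0, 2.0, 5.2])  # cm^2/g, gold
mu_rho_i = np.array([26.0, 8.6, 12.3, 6.9, 4.4, 2.1, 1.1])    # cm^2/g, iodine

def transmitted(mu_rho, areal_density_g_cm2):
    """Energy-integrated signal transmitted through a contrast-filled well."""
    return np.trapz(fluence * np.exp(-mu_rho * areal_density_g_cm2), energies_kev)

def contrast(mu_rho, areal_density_g_cm2):
    """Relative signal deficit of the well against the unattenuated background."""
    open_beam = np.trapz(fluence, energies_kev)
    return 1.0 - transmitted(mu_rho, areal_density_g_cm2) / open_beam

# Equal molarity of radiopaque element: areal densities scale with atomic mass.
rho_t_au = 0.10                          # g/cm^2 of gold along the beam (assumed)
rho_t_i = rho_t_au * (126.9 / 197.0)     # matching molarity of iodine
print("Au contrast:", round(contrast(mu_rho_au, rho_t_au), 3))
print("I  contrast:", round(contrast(mu_rho_i, rho_t_i), 3))
```

Repeating the calculation with spectra for different tube potentials reproduces the qualitative pattern reported above: gold gains below the iodine k-edge and again above its own k-edge near 80.7 keV.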
The project has been undertaken in three phases. The first phase required re-surveying of radiation dose related parameters (DLP, CTDIvol), in addition to careful recording of patient size related parameters and technique factors for each examination. Phase 2 has involved attempts to optimize the radiation dose and technique across the three pilot sites. The optimization process involved a step-wise process of (typically) mAs reduction under the watchful eye of a radiologist. This process is aided by our ability to perform inter-site technique comparisons. Phase 3 is a post-adjustment re-survey. Comparison with patient related data recorded in Phase 1 provides some surety that the reported dose reductions are legitimate. The Phase 1 radiation dose and technique survey showed median DLP at the highest dose site to be between 15 (head scans) and 139 (KUB studies) percent greater than at the lowest dose site, without substantial variations in average patient size or scan length. The main distinction in scan technique employed was the type of dose modulation utilized. For example, the abdomen scan technique involved both patient size (ACS) and Z axis (ZDOM) functions at GCH, whilst PAH and TH employed only angular modulation (DDOM) and ACS functions respectively. Preliminary Phase 3 data at PAH show median DLP reductions ranging from 38% for routine head scans, to more modest reductions of approximately 10% for body scans that employ Philips' dose modulation features. To date, body scan dose reductions have been constrained to those achieved by changes in the dose modulation function utilised, e.g. DDOM replaced with ACS and ZDOM. Discussion: Substantial dose reductions have been achieved for head examinations, which utilize a fixed mAs scan technique without dose modulation of any kind. The "optimized" median DLP of 736 mGy.cm compares favourably with published DRLs 1,2, but remains clearly higher than that achieved by Heggie 1, possibly due to the extended scan length technique employed (no head tilt) at PAH. Other optimization efforts have to date been limited to those achieved by changes in dose modulation method. Despite the assistance of local Philips staff, we are yet to devise a foolproof methodology for controlled dose reduction where the ACS system is active. The problem is primarily related to the system's tendency to adapt (without operator intervention) to the size of the recently scanned population by changing the "recommended" mAs (and therefore patient dose). A multi-centre CT radiation dose optimization process has successfully produced substantial dose reductions for the routine head scan technique.
Dose optimization in conjunction with Philips' CT automatic exposure control system (ACS) has been problematic to date; however, modest dose reductions have been achieved subsequent to implementing changes that were highlighted by simple inter-site technique comparisons. It is our intention to expand this project to other QH hospitals, scanners and a broader range of examinations.

Introduction: In recent years modern screen:film technology has led to a generalised reduction of the mean glandular dose (MGD) from mammography in NZ, resulting in an average MGD of 1.08 mGy to the accreditation phantom in 2005 and 2006. Film is now being overtaken by the introduction of various digital technologies, which have a significant impact on the magnitude and distribution of mammographic doses. Method: Data is collated from compliance surveys performed by medical physicists according to the ACPSEM Position Paper. I will present data from a selection of technologies in clinical use in public and private institutions in NZ, including some within BreastScreen Aotearoa. Conclusions: Direct digital technology is generally associated with a lowering of doses, due mainly to the use of harder radiation fields, which are better matched to the digital detectors than the traditional Mo:Mo target:filter combination. The reduction in MGD, and its magnitude, depends on the technology deployed by the manufacturer and upon the medical physicist's setting of image quality parameters. There is much helpful guidance in the literature and in manufacturers' recommendations. Direct digital technology is, however, expensive and uneconomic for many. On the other hand, computed mammography tends to be associated with an increase in dose of 50% or more. The optimum radiation field is less well established and recommendations are less definite. It is rapidly replacing screen:film mammography where practices are changing to digital radiography, and it is often seen as cheaper and simpler to change to computed mammography than to keep processing facilities for mammography alone. This presents considerable technical and ethical challenges to radiological physicists.

Glenn Stirling and Tony Cotterill. Introduction: The National Radiation Laboratory (NRL) has surveyed the use of computed tomography (CT) scanners for medical diagnosis in New Zealand each decade since 1988. The most recent survey was carried out in 2007. This paper reports the findings of the most recent survey, with the principal intentions of establishing national reference dose levels for CT examinations and comparing practices and doses with past NRL surveys and international reports. Methods: The survey was designed to obtain and analyse information on: 1. the frequencies and types of procedures being performed; 2. the age distribution of patients undergoing these procedures; 3. the patient doses for these procedures. To obtain this information, each of the CT facilities in New Zealand was asked to record details on NRL-supplied forms for scans of the head or trunk region for the next week or for the next 50 scans, whichever came first. Each centre was asked to choose the most appropriate dose indicator for their make of CT scanner. Conclusions: There has been a shift towards views involving scanning longer lengths of the body in comparison to previous surveys. For example, abdomen scans and pelvis scans are being replaced by a combined abdomen & pelvis scan, although the local description is often still an abdomen scan.
The whole body chest, abdomen & pelvis scan has also appeared. This trend is probably technology-driven, as the capability of routinely scanning greater lengths of the body has only become possible with the more recent generations of CT scanner. There has been a shift towards older patients getting CT scans, with proportionately fewer paediatric CT scans occurring. The mean effective dose per capita per annum has about doubled since the last survey, but the number of scans per capita per annum has not changed greatly. This implies that the increase is due primarily to the shift to procedures scanning longer lengths of the body. The diagnostic reference levels from this survey appear comparable to the reference doses from previous surveys.

Introduction: The GEANT4 Monte Carlo simulation toolkit is widely used in the medical physics community. It provides a set of predefined solids that can be combined with arithmetic and boolean operations to build very complex structures. With its capacity to build time-dependent geometries, GEANT4 is well suited to modelling dynamic intensity modulated radiotherapy applications and also to organ motion studies. PET/CT and SPECT/CT have become more and more widely used in nuclear medicine and are utilised in radiotherapy applications. CT provides the anatomical atomic density distribution in the human body, and SPECT and PET provide the radionuclide activity distribution within the organs at the time of the scan. In internal emitter therapy and external beam radiotherapy applications, the absorbed dose distribution is necessary to determine the dose to critical organs and hence to avoid organ toxicity. This research investigates whether the patient-specific density and radionuclide activity distributions could be used for accurate calculation of the absorbed dose distribution in the human body during nuclear medicine imaging scans and internal emitter and external radiotherapy applications. We investigated several methods of converting CT numbers into mass density and elemental weights of tissues for dose calculation with GEANT4-based Monte Carlo simulation. Methods: An optimised fast Monte Carlo simulation was developed using VC++ to simulate particle transport across the various atomic densities in the body using the GEANT4 (version 9.1.p02) Monte Carlo code. DICOM CT images were used as the voxelised detector, and the activity of the radionuclide was distributed in voxels according to the pixel values in the PET or SPECT images. The mass density distribution in the human body was obtained using a Hounsfield number to material conversion method. A material list according to the density range was built as given in ICRU report 46. A set of new materials was created for voxels sharing the same base material (within a given range) but whose mass density falls in a different density interval, in order to obtain the correct electron density value. The Hounsfield number conversion was established with linear fits over eight intervals to the electron densities of human tissues, and materials were created in 10 density bins within each interval. The relative absorbed energy distribution was obtained using the activity distribution, which was estimated from 41 PET images. A total activity of 20 MBq was distributed across the voxels, weighted according to pixel values. An image processing algorithm was used to segment the boundaries of the organs in order to calculate the dose distribution.
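A sketch of the Hounsfield-number-to-material step described above is given below, under stated assumptions: a piecewise-linear HU-to-density mapping and, for the Monte Carlo material assignment, a subdivision of each interval into ten density bins. The breakpoints here are illustrative, not the eight-interval fit to ICRU report 46 data used in the study.

```python
import numpy as np

# Illustrative piecewise-linear HU -> mass density (g/cm^3) breakpoints.
HU_BREAKS = np.array([-1000, -100, 0, 100, 400, 1000, 1600, 3000])
RHO_BREAKS = np.array([0.001, 0.95, 1.00, 1.07, 1.28, 1.65, 2.00, 2.80])
BINS_PER_INTERVAL = 10

def hu_to_density(hu):
    """Mass density from HU by linear interpolation between breakpoints."""
    return np.interp(hu, HU_BREAKS, RHO_BREAKS)

def hu_to_material_bin(hu):
    """(interval, bin) pair identifying the simulation material for a voxel:
    the interval selects the elemental composition, the bin the density."""
    interval = np.clip(np.searchsorted(HU_BREAKS, hu) - 1, 0, len(HU_BREAKS) - 2)
    lo, hi = HU_BREAKS[interval], HU_BREAKS[interval + 1]
    frac = (hu - lo) / (hi - lo)
    dens_bin = np.clip((frac * BINS_PER_INTERVAL).astype(int),
                       0, BINS_PER_INTERVAL - 1)
    return interval, dens_bin

hu_slice = np.array([[-980, -60, 30], [250, 900, 1400]])
print(hu_to_density(hu_slice))
print(hu_to_material_bin(hu_slice))
```

Binning within each interval is what keeps the number of distinct Geant4 materials manageable while still giving each voxel a mass density close to the interpolated value.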
Results: A simulation was carried out with the 41 CT images as the detector and the 41 PET images for estimating the cumulative dose distribution during the period of the scan. Figure 1 shows the density distribution in the 11th slice, and the cumulative absorbed dose distribution in that slice is given in Figure 3. Figure 2 shows the activity distribution in the 1st slice used in this simulation. Nearly 24 hours of computing time was required to simulate 20 million rays on a Windows Vista based 2.1 GHz mobile processor laptop computer with 2 GB RAM. Discussion: In nuclear medicine and radiotherapy applications, an event-by-event Monte Carlo method provides the most accurate dose calculations compared to analytical or kernel-based Monte Carlo techniques. For a given pharmaceutical used in nuclear medicine, the activity concentration within the human body varies with the physiological activities of the human tissues and the functions of the organs. The present simulation is limited to calculating the absorbed dose during the scanning period. The total absorbed dose could be estimated using kinetic models of the organ of interest. Different voxel navigation methods were investigated to find the best speed and least memory usage, so as to simulate a large data set using limited resources. Conclusions: This simulation can be extended to treatment planning systems in radiotherapy, brachytherapy and internal emitter radiotherapy applications. The activity distribution varies with the physiological activities of the body. For accurate estimation of the absorbed dose distribution in the human body resulting from varying activity distributions in organs and tissues, a series of simulations using sequentially acquired PET or SPECT images is required. In future research, we will validate the simulation against published MIRD results. This simulation will provide tools for accurate estimation of absorbed dose in clinical applications. A full-function IMRT linac was added to this simulation framework to calculate the absorbed dose distribution in dynamic IMRT treatments. This will provide the necessary tools to estimate absorbed dose distributions within the tumour volume and also in critical organs in the human body.

The IEC 80001 draft standard provides a high-level life cycle process for applying risk management to the integration and management of clinical networks that incorporate medical devices. It defines the roles and responsibilities of those who are involved in the process, as well as some of the key work products that are developed and maintained by the stakeholders. How will this process work in real-world applications, though? Two key areas are covered that help those who integrate and maintain these networks understand how 80001 might be applied. The first area is that of network classification: determining the potential risks presented by a given network configuration and then applying the appropriate levels of risk management when integrating new components and maintaining their integrity. Wireless networks pose unique management issues, especially when they are being used to transfer life-critical information from mobile medical devices to back end systems: you can't wire around the problem! An overview is provided regarding how 80001 team members

The IHE Patient Care Device group is starting its 4th Cycle of standards-based profile development. It has already had two successful Connectathons and Showcase demonstrations at HIMSS conferences in the U.S.
and Europe, and is now well into preparations for its 3rd Connectathon. Major companies have participated, including GE, Philips, and Draeger/Siemens; however, few if any products are commercially available today that support PCD profiles. Why? And what will it take to turn the IHE PCD vision for medical device integration from a dream into reality? This presentation provides a look at the current and future IHE PCD development activities, and looks at the last puzzle pieces that need to be added to complete the picture. This will include both anticipated technical components and business propositions. "Infrastructure is a hard sell: concentrated cost with diffuse benefit" (Dr. Brailer); however, developments in national health programs and the ever-spiralling costs of health care are bringing about the demand for integrated solutions and broad application of health information technologies. Will this be enough, though, to reach the tipping point for deployment of open standards-based device connectivity?

Introduction: Ultrasound imaging systems, pathology analysers, patient monitoring systems and other such devices are used across the health environment. This is increasing the need for a responsive support network to provide ICT services to the clinical environment. Biomedical Technology Services (BTS) does not currently have the resources to maintain databases, administer web services, write interfaces to other systems, administer clinical information systems and provide integrated ICT infrastructure to an enterprise environment controlled by another department. To bridge the gap between the clinicians and the Information Division, Biomedical Technology Services has initiated a formal working relationship with the Information Division to provide increased support in the clinical environment. Methods: A needs assessment was conducted with clinicians and biomedical staff to determine the business requirements with regard to the installation and support of IT based biomedical systems. A risk assessment was then completed to highlight past and future problems and possible actions that may prevent them. BTS then presented this risk assessment to the ID, where it was determined that a formal partnership was required. Discussions were initiated with the Information Division to determine the best communication paths and the areas of greatest concern. ID and BTS then created an executive brief that commits the senior directors of both departments to sponsor this relationship. A formal document called the BTS ID relationships paper was then created to set the terms of reference and address specific issues raised during the consultation phase, together with any further issues that needed to be addressed. This document was then circulated to all managers within BTS and ID for comment. Results: As this formal partnership has not yet been signed, the full effect of the engagement has not been realised.
Some of the related achievements as a result of this engagement have been:
- input into the Queensland Health cabling standard;
- inclusion in the planning for ICT infrastructure throughout the state;
- inclusion in evaluations of clinical information system tenders that connect to biomedical equipment;
- being consulted on ICT work related to biomedical equipment;
- access to the ICT call centre database, to redirect calls for biomedical equipment that looks like ICT equipment;
- a greater understanding of the ID processes, for quicker and better planned installation of biomedical systems.
Discussion: As the BTS ID relationships paper has not yet been signed, it is difficult to realise the full benefits of this arrangement, although some of the related benefits have started to be realised, as shown by the above results. Although this partnership is in its infancy, the support it has attracted at the upper management level shows great promise. The basis of the BTS ID relationships paper is that BTS will manage all front line maintenance on biomedical systems, including each system's LAN. The ICT department will assist when asked and will have responsibility for the WAN. Common communication paths are to be provided whenever either party modifies any ICT infrastructure. Although this partnership is in its infancy, it has been shown that there are significant high-level benefits to this type of arrangement. Without such a partnership, the engagement between BTS and ID would continue to be ad hoc, leaving the clinicians to project manage the resources of BTS and ID with no overall plan to integrate with future projects. The BTS ID relationships paper was designed to make the biomedical technician's work easier when implementing and maintaining biomedical systems. Although much of this has not been achieved yet, what has been proven is that creating this relationship has given BTS a greater ability to be involved in ICT planning. The initiation of this partnership has increased the flow of information between these departments, allowing them to plan the implementation of new services instead of being reactive.

Peter MacCallum Cancer Centre, Melbourne, Australia. Introduction: Modern infusion pumps incorporate sophisticated safety software with the aim of reducing medication errors. Many of these pumps also have two-way wireless communication capabilities. The wireless communication is very useful to the biomedical engineer in the management of these devices. We describe the installation of a new fleet of infusion pumps and the resulting benefits in equipment management. The entire fleet of general purpose infusion pumps at Peter MacCallum Cancer Centre was replaced with 150 Hospira Plum A+ pumps over a period of one week. Prior to the change, extensive in-service training of the nursing staff was carried out. The Plum A+ pumps incorporate the "MedNet" dose error reduction system, which aims to reduce infusion errors. The software allows for libraries of drugs and doses which are specific to designated clinical areas. For a specific area, the library contains a list of common drugs with their normal maximum and minimum dose limits. Soft limits can be overridden, after a warning, as particular circumstances require, but hard limits cannot be exceeded. The wireless network previously installed in the hospital was available for pump communication. Each infuser incorporates a wireless 802.11 a/b/g module communicating over the 2.4 GHz band with WPA2 encryption. Each pump needed configuration for communication with the network.
Asset numbers and serial numbers were loaded at this stage and associated with the MAC address of each unit. Another task, which took some time, was to map the geographical areas in the hospital to the MAC addresses of the wireless access points (WAPs). The MedNet software and drug dose library are maintained on a dedicated server. The drug library was prepared by nursing and pharmacy staff, and the drug dose library was loaded into the pumps via the network. Discussion: The system implemented allows updated drug libraries to be downloaded to the pumps as required and provides web access to a great deal of information. When new drugs are to be made available, or drug dose protocols are revised, a new drug library can be downloaded, remotely, to all pumps which have wireless communication. The library is downloaded on command, but it is not put into active use until accepted by the nurse at the end of the current infusion. The information from each pump, which is communicated back to the server, provides numerous reports of activity. Pumps report back at about 5 minute intervals. This reporting, in real time, assists in:
- physically locating the last access point of pumps that require maintenance;
- managing the inventory of active pumps and those which have been removed for maintenance or repair;
- monitoring an event log and error log for each pump;
- checking the infusion status before transferring a new drug library.
The ability to communicate with the fleet of infusion pumps in real time enables the biomedical engineer to monitor the error log in each pump at any time and, importantly, to identify the location of a pump which may be due for programmed maintenance. Unit utilisation in the various clinical areas is another indicator which is provided by the system.

The stomach has an electrical pacemaker, like the heart, and propagating electrical activity that initiates and coordinates muscular contractions. Dysrhythmic gastric electrical activity (GEA) is associated with common and highly symptomatic conditions such as gastroparesis and functional dyspepsia. Non-invasive recording of GEA by electrogastrography (EGG) is unable to determine abnormal propagation of GEA or electrical uncoupling. Endoscopic measurement is limited by the inferior conductance of GEA through the inner mucosal layer. An alternative is to measure GEA directly from the stomach's outer serosal surface. Our aim was to develop a minimally invasive approach to the measurement of GEA and to demonstrate its validity in a porcine model. Methods: Device design: An array of five closely-spaced electrode heads was constructed from 300 micron silver wire set in a 5 mm diameter epoxy resin platform. The five electrodes joined stainless steel connecting wires, which were set into a laparoscopic sleeve layered with Teflon and silicone glue, followed by a cable tube of silicone over copper shielding, and finally a bayonet jack and socket connector set. The device was designed for repeated sterilisation by ethylene oxide. In-vivo validation: Two weaner pigs underwent general anaesthesia and midline laparotomy. Simultaneous recordings were obtained from cutaneous EGG, a high-density 4x8 electrode array placed on the serosal surface of the gastric antrum, and the novel laparoscopic device, the tip of which was positioned 25 mm proximal to channel 32 of the high-density array, on the serosa of the gastric corpus.
All data were acquired through the ActiveTwo System (Biosemi) and filtered using a 2nd-order Bessel filter with a cut-off frequency of 2 Hz. The five channels from the laparoscopic device were averaged and compared with the EGG and high-density array recordings. The timing of a GEA event (slow wave) was established by the point of most negative gradient. Validation of true slow wave recording was performed by event concordance with the two other recognised methods, and by detection of an appropriate 'lag' (slow wave velocity is approximately 4-5 mm/s for this region of the porcine stomach, so the corpus measurement should precede the antral measurement). Results: GEA was immediately and continuously recorded by all three methods with good concordance (Figure 2). An appropriate lag of 5.1 s was measured between slow waves registered by the novel laparoscopic device (corpus) and channel 32 of the array (antrum). Discussion: Our novel laparoscopic device (we have labelled it the 'Lammer's Wand') effectively recorded slow waves with acceptable quality. Further distal shielding might reduce noise to the low level of the high-density array. The only previously described minimally invasive device for recording GEA required traumatic penetration of electrodes into the seromuscular layer of the gastric wall; this device is atraumatic. The device will make it possible to laparoscopically investigate the nature, prevalence and clinical importance of GEA dysfunctions, and in the future may find therapeutic application alongside gastric electrical stimulation therapies. Conclusions: A new device for the laparoscopic measurement of GEA is presented. It has undergone in-vivo validation in a porcine model.
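A sketch of the signal chain described in this abstract is given below, under stated assumptions (the sampling rate, detection threshold and refractory window are not given in the source): low-pass filtering with a 2nd-order Bessel filter at 2 Hz, averaging the five wand channels, and timing each slow wave at the point of most negative gradient.

```python
import numpy as np
from scipy.signal import bessel, filtfilt

FS = 512.0  # Hz, assumed sampling rate of the acquisition system

def preprocess(channels):
    """Low-pass filter (2nd-order Bessel, 2 Hz cut-off) each channel and
    average the five wand electrodes into one corpus signal."""
    b, a = bessel(2, 2.0 / (FS / 2.0), btype="low")
    filtered = filtfilt(b, a, channels, axis=1)
    return filtered.mean(axis=0)

def slow_wave_times(signal, min_separation_s=5.0):
    """Event times (s) at the point of most negative gradient. Candidate
    samples below an assumed threshold are grouped into clusters, and the
    steepest point of each cluster is taken as the event."""
    grad = np.gradient(signal) * FS                 # volts per second
    threshold = grad.mean() - 3.0 * grad.std()      # assumed detection threshold
    below = np.where(grad < threshold)[0]
    events, cluster = [], []
    for idx in below:
        if cluster and (idx - cluster[-1]) / FS > min_separation_s:
            events.append(cluster[np.argmin(grad[cluster])] / FS)
            cluster = []
        cluster.append(idx)
    if cluster:
        events.append(cluster[np.argmin(grad[cluster])] / FS)
    return np.array(events)

# Synthetic demo: five noisy channels carrying a 3 cycles/min slow wave.
t = np.arange(0, 60, 1 / FS)
wave = -np.sin(2 * np.pi * 0.05 * t) ** 9           # sharpened repolarisation edges
channels = wave + 0.1 * np.random.default_rng(1).standard_normal((5, t.size))
print(slow_wave_times(preprocess(channels)))
```

Applying the same timing rule to a chosen antral channel and differencing the event times would yield the corpus-to-antrum lag used for validation above.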
Introduction: The Two-Part Nail (TPN) is a new intramedullary device for the fixation of pathological fractures due to tumours in long bones. Unlike traditional intramedullary (IM) nails, the TPN is inserted in two halves, proximally and distally, through the fracture site, which has already been opened to remove the tumour. This makes the creation of another insertion point unnecessary, reducing surgical trauma. The two nail halves are connected in vivo by the surgeon and reinforced by a PMMA bone cemented joint. Bone cement is routinely used to fill the defect caused by the tumour. The design of the connector is the focus of this project. A pilot study was devised to determine whether the stiffness required to provide pain relief could be achieved through the support of the cemented joint in isolation, regardless of connection design. This would determine whether connection stiffness or simplification of the surgical procedure should be the primary design objective. This study further investigated the effect of circumferential grooves on the mechanical properties of the cemented joint. Methods: Three groups of five samples each were tested. The test samples were constructed from 10 mm Delrin rod with three milled, longitudinal grooves as per the current one-part nails. The test sample groups were:
- full length rods (225 mm) with no join;
- half-length rods (112.5 mm) with plain ends connected by a cemented joint;
- half-length rods (112.5 mm) with circumferential grooves around the rod ends, connected by a cemented joint.
The cement mantles were manufactured by syringe injection of CMW 3 PMMA bone cement into a mould. The bone cement was mixed by hand without a vacuum mixer. The cement mantle formed around the connection was 30 mm long with a uniform radial thickness of 2 mm. The samples were tested in torsion over 10 cycles, to a strain of 5 degrees and back to zero strain, at a loading rate of 1 deg/s. This allowed the recording of torsional stiffness as well as conditioning the samples to approximately simulate physiological loading. The samples were then tested in four-point bending (inner span 50 mm, span ratio 3:1). Loading was at 0.5 mm/s up to 22 mm deflection, the maximum possible in this rig without impingement. The cement sprue was oriented to point upwards (as shown in fig. 1 below) in order to minimise the effect of the sprue on the mechanical properties of the cement mantle. Results: There was a significant increase in stiffness for plain cemented samples over solid rods in torsion (13% mean difference, P=0.00006) and bending (12% mean difference, P=0.003). However, there was no significant difference in stiffness, either in torsion (P=0.868) or bending (P=0.743), between the plain and grooved cemented samples. The grooved samples were stronger than the plain samples (16% mean difference, P=0.0003), but both were weaker than the solid rods, which did not fracture at the maximum possible deflection. The joined rods failed by fracture of the cement mantle, on the upper or tensile side of the rod. Discussion: The most important parameter for these implants is the stiffness in torsion and bending, as stiffness leads to pain relief. Because it is not expected that these implants will fail catastrophically, the absolute strength of the connection is relatively unimportant. As these rods are used for palliative surgery in patients with limited life expectancy, fatigue life is a lesser consideration. The applicability of these results depends upon the surgeon's ability to form a cement mantle of sufficient thickness and uniformity. Further studies to determine the effect of poor cement technique are planned, along with finite element analysis of connector designs and the effect of varying cement mantles. Conclusions: It has been shown clinically that full length Delrin rods are stiff enough to provide pain relief. These results then indicate that the TPN with a cement mantle is stiff enough to provide pain relief. Because of this, the primary design criterion becomes simplification of the surgical procedure. As the circumferential grooves do not increase the stiffness of the nail, and the strength is relatively unimportant, the additional manufacturing complication they cause precludes their use in the design.

Introduction: Diamond is a potential detector material for dosimetry applications; it is radiation hard, chemically inert and non-toxic, and has near-tissue equivalence (Z = 6, c.f. Z ≈ 7.3 for tissue). Diamond detectors are commercially available 1. These detectors utilise natural diamonds specially chosen for their properties, contributing to the high price and long lead times associated with the purchase of these detectors. In contrast, synthetic diamond is significantly cheaper, and its properties, which can be tailored during synthesis, are more reproducible. However, synthetic diamond still suffers from defects/trap states, which give rise to undesirable device characteristics, such as the need to prime the device (expose the device to a certain dose prior to use as a dosimeter); for example, see references [2-5]. Methods: Prototype detectors have been fabricated from a variety of commercially-available synthetic diamond films grown using chemical vapour deposition.
These devices have been exposed to 6 MV photon beams from a Varian 600C linear accelerator. A Farmer 2570/1 dosimeter and a Keithley 6430 source meter unit have been used to measure the detection characteristics of these devices. Measurements have been taken using a Perspex build-up cap and in a solid water phantom; device performance has been compared to an ionisation chamber. Results: Current values of up to a few nanoamperes have been measured for typical bias voltage levels (100-250 V). Figure 1(a) shows the amount of charge integrated over 10 seconds using the Farmer 2570/1 dosimeter for photon beams of different dose rates; the device has not been primed, is surrounded by an air cavity inside a Perspex build-up cap, and is biased at 247.75 V using the Farmer dosimeter. Discussion: An increase in the current through the detector (and hence the amount of charge integrated) is obvious when the detector is exposed to the photon beam. However, the dark current appears to decrease with exposure of the detector, and there is an initial overshoot when the photon beam is turned on, with an exponential decay before reaching a steady state. Increasing the dose rate increases the current in the detector; the photocurrent appears to obey a power law in agreement with theory [6] (see Fig. 1(b)). The detector shows no significant angular dependence, as shown in Fig. 1(b); 0 degrees indicates the detector face-on to the photon beam with irradiation through the positively-biased contact, 180 degrees face-on through the negatively-biased contact, and 90 degrees with the detector edge-on. Conclusions: Diamond demonstrates potential for use as a viable dosimeter. However, several issues remain to be resolved, such as the need for priming. Introduction: Modern clinical techniques such as radiation therapy demand superior instrumentation in order to ensure that the radiation patients receive is optimal. Diamond is a promising material for radiation detection, primarily due to its near-tissue equivalence (carbon) and because it is chemically inert, non-toxic and highly resistant to radiation. Unfortunately, the advantages of natural diamonds are offset mainly by their high cost as well as poor reproducibility due to the scarcity of suitable gems of consistent quality. Recent progress in synthetic diamonds obtained by chemical vapour deposition (CVD) promises to eliminate these problems. The focus of the current project is to evaluate the suitability of CVD diamonds as radiation detectors and to validate them for use in a clinical setting. Methods: Several prototype detectors have been fabricated in order to test various device parameters relevant to clinical dosimetry. Perspex build-up caps and wax were used to encapsulate the metallized films, and these were irradiated in a Solid Water phantom using a 6 MV photon beam from a Varian 600C Clinac. Relative dose linearity and sensitivity of response were investigated and compared against a 0.6 cc Farmer ion chamber in a Solid Water phantom. Photocurrents were typically integrated over small time intervals using a 2570/1 Farmer Dosemeter. A Keithley 6430 SourceMeter controlled via LabVIEW was also used for a variety of other, more precise measurements. Results: Fig. 1 illustrates some results from a 200 μm polycrystalline film with a sensitive volume of 0.63 mm³ that was exposed to 6 MV photons at 2.5 Gy/min at 90°, i.e. in an "edge-on" configuration.
Fig. 1(a) shows both the measured current and the leakage (dark) current versus cumulative dose. Fig. 1(b) illustrates the resulting net average current from Fig. 1(a) as well as the average sensitivity versus cumulative dose. Current and sensitivity were averaged over 2.4 second intervals. Dependence on dose rate, incident angle and applied electric field for film thicknesses of 100, 200 and 400 μm has also been investigated but is not illustrated here. Discussion: Exponential saturation of dose was observed during repeated exposures. Priming of about 40 Gy was needed in order to achieve stability; this was expected given the overall quality of the sample. The exponential rise or decay of sensitivity also changed in behaviour and magnitude as a function of applied electric field, where different rise times and an initial "overshoot" of photocurrent were observed. The relationship between the net average current I and dose rate D followed the known power law I ∝ D^Δ. Dependence on incident angle was found to be insignificant. Aim: To investigate whether a modified double exposure technique can be used to "calibrate" radiochromic film by using two uniform sensitivity exposures, one before and one after the measurement exposure. Proposed Procedure: Four 10 x 12.5 cm pieces of EBT film, cut from the same sheet, will be exposed to a series of dose distributions from a 6 MV photon beam produced with a linear accelerator. The film will be placed at 100 cm SFD in the centre of a 30 x 30 cm solid water phantom with 10 cm underneath and build-up of 1.5 cm, centred in a field size of 30 x 30 cm. After a suitable waiting period the exposed film will be scanned using an Epson Expression™ 10000XL flatbed scanner, and the image registered and processed with Matlab™, giving a matrix of optical densities. The film will then be exposed with the next radiation distribution. The first and third (uniform) exposures will be used to calculate the sensitivity of the film, which is in turn used to calibrate the second, "unknown", exposure. A method similar to that described in AAPM Report 63 (p. 2103) will be followed. The resulting "corrected" matrix will then be compared to the "uncorrected" matrix (no sensitivity corrections) and to the standard double exposure technique (one sensitivity correction), using statistical tools to assess image uniformity. The unknown exposures for the four pieces of EBT will be a uniform exposure for one, and dose distributions commonly used in radiotherapy QA for the others: a junction, a spoke shot, and an IMRT fluence map. Expected Results: Figure 1 shows the results of a previous study of the double exposure technique applied to Gafchromic® MD-55-2 (a first-generation radiochromic film); there was a dramatic improvement in film uniformity using this technique, which should be reproducible in EBT. In addition, the second sensitivity exposure will detect any non-linearity in response over the dose range. Introduction: Out-of-field radiation is important in terms of its potential for late effects such as radiation-induced carcinogenesis, heart disease and respiratory problems. Intensity modulated radiation therapy (IMRT) typically employs a greater number of monitor units than conformal therapy, potentially increasing such risks. IMRT delivery at the William Buckland Radiotherapy Centre (WBRC) employs a BrainLAB m3™ mini-multileaf collimator (MMLC) in sliding window mode with a static jaw field opening of 9.8 x 9.8 cm², mounted on a 6 MV Varian 600C.
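A minimal sketch of the modified double-exposure correction proposed in the film abstract above, assuming a linear OD response over the dose range and hypothetical array names; it illustrates the idea, not the authors' code.

    import numpy as np

    def corrected_dose_map(od_pre, od_meas, od_post, uniform_dose_Gy):
        # Per-pixel sensitivity (OD per Gy) from the first and third uniform
        # exposures; averaging the two also compensates first-order drift in
        # film response between the exposures.
        sensitivity = 0.5 * (od_pre + od_post) / uniform_dose_Gy
        # Calibrate the second, "unknown" exposure pixel by pixel.
        return od_meas / sensitivity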
Treatment planning systems (TPS) are normally commissioned using measured data that extend only a few centimetres beyond the field edge, with penumbra defined as 80% to 20% of the maximum dose for the field. Dose extending outside the field is not intended to be used for the overall calculation of the dose distribution or to contribute to the inverse optimisation procedure. Therefore, one would expect the dose distributions predicted by the iPlan TPS to be inaccurate in regions far from the primary field. In this study we compare doses determined by iPlan to measurements and Monte Carlo calculations of out-of-field doses corresponding to a range of fields shaped by the MMLC. Methods: Dose points were measured with IC3 and IC13 ionisation chambers (Wellhöfer, Schwarzenbruck, Germany), for smaller and larger fields respectively, at various points in a water tank (positioned asymmetrically) up to 45 cm from the isocentre. A range of field sizes was employed, along with variation of other parameters such as source-surface distance and depth in water, so as to determine the factors of greatest influence and establish relationships with dose. The doses at these positions were also calculated with BrainLAB iPlan. The Varian 600C with mounted BrainLAB MMLC was modelled using BEAMnrc, and dose points at the same locations were calculated in a voxelated water phantom via Monte Carlo radiation transport. Results and Discussion: Figure 1 shows the relative dose for a range of field sizes up to an off-axis distance of 45 cm, as measured with the IC13 and IC3 ionisation chambers. The TPS calculates dose points such that for the 2.4 x 2.4 cm², 5 x 5 cm² and 9.8 x 9.8 cm² fields, the 0.1% isodose lines extend to 5, 10 and 20 cm respectively. The measurements show dose values falling from 0.1% at 20 cm to slightly more than 0.01% at 45 cm. Therefore, for small fields the TPS effectively does not account for doses at the 0.01% level beyond 10 cm from the isocentre. For a 20 Gy hypo-fractionated treatment, the dose to peripheral regions is of the order of cGy (0.1% of 20 Gy is 2 cGy), which is a significant dose in radiological protection terms. Initial Monte Carlo results generally agree with these measurements. A discrepancy is therefore expected between the out-of-field doses calculated by iPlan and those found experimentally. Out-of-field doses raise questions of treatment optimisation, particularly for patients with long prospective times over which secondary effects may become manifest, such as paediatric patients or those without a primary malignancy. Introduction: Endorectal balloons are utilised for prostate cancer external beam radiotherapy in a number of institutions as a means of immobilising the prostate and reducing the volume of the rectal wall receiving high doses [1, 2]. The use of endorectal balloons during treatment delivery has been shown to decrease the delivered rectal dose, leading to decreased toxicity [3]. Air is used to fill the balloon to a volume of 60 cm³ at the University of Wisconsin hospital. This gives a toroid-shaped air cavity immediately adjacent to the target, with a diameter of approximately 5 cm and a thickness of 3 cm. This volume of air will perturb the dose distribution in the surrounding tissue. In this study the perturbation of the dose distribution due to the endorectal balloon was measured using radiochromic film for a conventional IMRT plan and a helical tomotherapy plan. The results were compared with commercial radiotherapy planning systems.
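The radiological-protection point in the out-of-field abstract above is simple arithmetic; a sketch of the conversion from relative out-of-field dose to absolute peripheral dose:

    def peripheral_dose_cGy(prescription_Gy, relative_dose):
        # relative_dose is the measured out-of-field dose as a fraction of
        # the in-field dose, e.g. 1e-3 for 0.1%.
        return prescription_Gy * relative_dose * 100.0

    print(peripheral_dose_cGy(20.0, 1e-3))  # 0.1% at ~20 cm -> 2.0 cGy
    print(peripheral_dose_cGy(20.0, 1e-4))  # 0.01% at ~45 cm -> 0.2 cGy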
Methods: An 8 x 8 x 16 cm³ acrylic phantom was constructed to match the external contours of an EZ-EM balloon catheter. This phantom was then placed between two halves of a TomoTherapy 'cheese' phantom to simulate a pelvic anatomy. A planning CT scan was taken and a hypothetical prostate PTV, bladder, femoral heads and the balloon cavity were contoured in the Pinnacle radiotherapy planning system (RTPS) (Philips Radiation Oncology Systems, Fitchburg, WI). A seven-field IMRT plan was created. The CT data and contours were also exported to the TomoTherapy Hi-Art RTPS (TomoTherapy Inc., Madison, WI) and a helical tomotherapy (HT) plan was created. The prescription in both cases was 70 Gy in 28 fractions. Sheets of Gafchromic EBT film were cut and placed in the sagittal plane through the middle of the PTV and the air cavity. The films were then scanned on an Epson Perfection V700 flatbed scanner and analysed using ImageJ and MATLAB software. Calibration was performed using 5th-order polynomial curves generated from calibration films measured on the TomoTherapy and Varian 2100EX machines. Results: Absolute dose maps were created from the digitised film images. Profiles were taken through the PTV and the air cavity on the film, and compared with corresponding profiles in the respective RTPS-calculated dose cubes. The balloon cavity was seen to perturb the dose distribution, and this was modelled by both RTPSs to varying degrees of accuracy. The Pinnacle RTPS was seen to over-predict the dose at the anterior edge of the cavity by 5% and under-predict the dose at the posterior edge of the cavity by 20%. Similarly, the TomoTherapy RTPS over-predicted the dose at the anterior edge of the cavity by 5% and under-predicted the dose at the posterior edge of the cavity by 42%. In both cases the measured dose to the target volume at the cavity edge was at the prescription dose within error limits. The air cavity created by an endorectal balloon perturbed the dose distribution, and this is modelled by the Pinnacle and TomoTherapy RTPSs to varying degrees of accuracy. The effect of the perturbation will be reduced by blurring of the build-up and build-down regions adjacent to the cavity from the multiple beam angles. Flinders University, Adelaide, Australia; Editor, APESM. Introduction: This talk is aimed at research students (in particular), their supervisors and others who wish to publish their work. The session is intended to be interactive, with comments and discussion from the floor welcome. It will canvass the opportunities for publication that may arise from a research project. How to respond to reviewer comments will form part of the discussion. All students will produce a literature review for their thesis. This review, along with the identification of gaps and deficiencies in recent work and recommendations for future work, should be considered for publication. Some students will investigate a new technique or methodology which (along with some preliminary results) would be suitable for publication. Partition your work into discrete sections, then publish them progressively or sequentially. The abstract will normally be freely available on the internet and some readers will read only the abstract, so summarise all major aspects of the paper. Include the major results and limitations, and give numbers if you have them. Avoid writing "….are discussed" or "…are described" (i.e. avoid saying nothing); instead, actually state what you found out.
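A minimal sketch of the film calibration step described in the endorectal-balloon study above: a 5th-order polynomial mapping scanner value to dose. The calibration points below are placeholders, not measured data.

    import numpy as np

    # Placeholder calibration points: mean scanner value of each calibration
    # film and its delivered dose (Gy).
    pixel_value = np.array([52000.0, 47000.0, 42000.0, 37500.0,
                            33500.0, 30000.0, 27000.0])
    dose_Gy = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])

    to_dose = np.poly1d(np.polyfit(pixel_value, dose_Gy, 5))  # 5th-order fit

    scanned_film = np.full((512, 512), 36000.0)  # stand-in for a scanned film
    absolute_dose_map = to_dose(scanned_film)    # Gy, per pixel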
Discussion: If you wrote the article, you should be the first-named (and corresponding) author. Ensure that all of the co-authors have read and commented on the manuscript (that's what they are for). Ask particularly for comments on parts of the manuscript that you are not confident with. The peer review of your work will help with your thesis: your thesis will be looked upon more favourably if several papers have been published from it, and the scrutiny that precedes publication will improve your work, as many weak points or deficiencies will be noticed during peer review. Your published papers may form the basis for several chapters of your thesis. ARANZ is a private-sector innovation company based in Christchurch that develops and exports medical devices to a variety of overseas markets. Established in 1995 by four people with backgrounds in medical science and engineering, ARANZ has grown to a staff of nearly 50 people, with additional outsourced services and contract manufacturing done in Canterbury. Our first product was FastScan, a handheld 3D laser scanner that is mostly used in orthotics and prosthetics, and our latest product is Silhouette, a more compact scanner for use in wound care and documentation. We also have a geophysical data modelling software division that spun off from initial scanner data processing research. In this talk, you will hear about many of our experiences along the way and some of the issues we have faced, such as the innovation process (knowing what to work on); the marketing process (finding initial users and selling in numbers); regulatory issues and designing for manufacturing; IP protection, time to market and financing it all; and finding staff and growing the company. Based in Christchurch, New Zealand, Enztec is a designer and manufacturer of surgical devices for the orthopaedic industry. Its products include custom implants for tumour and trauma surgeries and instrumentation based around hip and knee replacements, and are sold throughout the world by the largest orthopaedic companies. Enztec's success has been based around being different, both in design and in service delivery. Its goal is to make great-looking products that enable a surgeon to produce a better result for the patient; through its products and service, Enztec aims to change people's lives. By being fast and flexible and by listening to customers, Enztec has been able to carve out a boutique niche in the orthopaedic world. Headquartered in Christchurch, Enztec was formed in 1993 and has 36 staff. It is privately held. Enztec is New Zealand's only orthopaedic company and holds ISO 13485 quality certification. In 2008 Enztec won "Deal of the Year" in the Canterbury Export Awards. The New Zealand Government granted funds for commissioning a service utilising a mobile operating theatre to provide on-site day surgery for rural New Zealand. It now regularly visits 20 rural hospitals and has provided over 9000 day-surgery cases since 2002. In addition to the shared facilities, funding was also provided for the sharing of knowledge using video communication. Both projects were implemented in 2002, and the government contract was extended for a further 5 years in 2006. A good idea is never a guarantee of success, especially in the medical world, and we briefly reflect on old stories of subterfuge and outright attempts at sabotage, all adding to life's rich tapestry.
Ehsan Samei. Introduction: An ethical and efficient use of x-ray medical imaging requires optimization of its operation, so that patient radiation dose is no higher than necessary and the images provide the most effective depiction of the relevant clinical information for which they are acquired. Methods: Optimization is approached within two distinct frameworks: developmental optimization and operational optimization. The former is the task of the manufacturer, while the latter falls within the domain of clinical medical physics. Operational optimization involves multiple aspects of imaging, including x-ray beam quality, radiation dose, acquisition geometry, and image processing. This lecture aims to define the role of optimization in x-ray imaging and to provide methods and several examples of how it can be implemented clinically. Examples include applications in breast imaging, chest radiography, digital tomosynthesis, and computed tomography. Conclusions: Optimization enables a clinician to take full advantage of applicable resources in terms of dose, time, medical information, and cost, enabling an effective and efficient integration of medical imaging systems within the clinic. Breast cancer is the most common cancer among women in the Western world, is on the increase in males, and is the leading cause of cancer-related death in females in Australia. Early detection is the best protection against breast cancer. Currently, mammography is the gold-standard screening technique, but it misses 15-20% of breast cancers because they have low X-ray contrast to the surrounding soft tissue; it is also unable to adequately detect breast tumours close to the chest wall and upper arm, and it suffers from a high false-positive rate, with only 10% of biopsies being malignant. Recently, microwave imaging for breast cancer detection has gained attention due to advances in imaging algorithms, microwave hardware and computational power. The breast is relatively translucent to microwaves, accessible for imaging, and there appears to be significant electromagnetic contrast between tumours and healthy tissues. The method is attractive to patients because both ionizing radiation and breast compression are avoided, resulting in safer and more comfortable exams. Microwave breast tumour detection has the potential to be both sensitive and specific, to detect small tumours in the early stages of development, and possibly to determine whether a suspicious area is malignant or benign. It is also quick and less expensive than methods such as MRI and nuclear medicine. Recent developments in passive, hybrid, and active approaches to microwave breast cancer detection and imaging will be reviewed. An overview will be given of our novel work in applying microwave imaging diagnostics developed in fusion plasma research, together with finite-difference time-domain based inverse methods, to microwave breast cancer detection and imaging. Introduction: Monitoring of the relative sensitivities of the heads of a multi-head gammacamera is not always included in quality control schedules. A quantitative comparison, using statistical parametric mapping (SPM), of brain SPECT from two groups of normal controls acquired on the same 3-head gammacamera but 3 years apart showed significant unexpected differences. We have shown elsewhere that the distinctive asymmetry of these changes could be simulated by imposing in one group a 10% decrease in sensitivity in one of the three camera heads (L Barnden, IEEE NSS-MIC Conference Record 2007).
Here we describe the detection and correction of inter-head sensitivity differences in individual scans. Methods: Relative sensitivity changes between the three heads within individual patient scans were detected from analysis of the variation with acquisition angle of the total decay-corrected counts. This variation is caused by differential attenuation at different angles. Its curve should be smoothly varying and should be closely approximated by the sum of its first few cosine basis functions. To estimate the sensitivity differences between camera heads, we applied a downhill simplex iterative adjustment of the sensitivity of each head to minimise a cost function: the mean absolute difference between the adjusted curve and the fitted sum of its first 3 (or 4 or 5) cosine basis functions. The method was validated by simulating sensitivity changes on noisy constant-sensitivity projections and applying the technique to measure them. An initial survey of inter-head sensitivity differences was performed over 7 years of SPECT acquisitions on the same 3-head camera. The sensitivities thus obtained were also used to correct the acquired SPECT projections and, for the same two groups of normal controls, the reconstruction was repeated and SPM analysis performed to detect any residual differences. Results: We show that, for typical SPECT acquisitions, this method can reliably detect sensitivity differences of 1% between heads. The review of scans since 2000 revealed typical maximum inter-head sensitivity differences of 4%. In one period of 2 months, this increased to 25% without being detected in the clinical scans or by routine QA. In another period of 9 months, scans acquired within days of each other showed maximum sensitivity differences varying by a factor of 3, and in some scans the sensitivity in one head appeared to drift within the scan. SPM analysis of the sensitivity-corrected normal controls from the two different periods showed that the differences between them had been eliminated. Discussion: Inter-head sensitivity differences in multi-head gammacameras, when each head scans a different arc of the acquisition circle and when the sensitivity differences drift, can have serious consequences in quantitative SPECT. We have developed a method that accurately detects small changes in inter-head sensitivity in individual SPECT scans on multi-head gammacameras, and that permits correction of the acquired scans. The method eliminated a significant SPM difference between two groups of scans. This method promises to be a useful QA tool for quantitative multi-head SPECT. Introduction: Laser speckle contrast analysis is a useful and convenient non-contact method of generating dermal perfusion information. The information relates to the flux of red blood cells into the dermis. It is closely related to optical Doppler measurements using fibre probes but, unlike that technique, can produce images of skin areas rather than point measurements of perfusion. The method dates to the late 1970s, when photographic images were used, but developments in lasers, digital cameras and computer technology over recent years have encouraged speckle methods to be revisited. Inexpensive and convenient systems, capable of producing perfusion images in real time, have been developed. The IRL laser speckle imaging system uses a CCD camera, a thermo-electrically cooled diode laser and a PC running custom software.
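The inter-head sensitivity estimation in the preceding SPECT abstract lends itself to a compact sketch: gains for two heads are adjusted (relative to the first) by downhill simplex so that the gain-corrected counts-versus-angle curve is as close as possible to its own smooth cosine-basis fit. This illustrates the published approach; the basis construction and starting values are assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def cosine_fit(y, n_basis=3):
        # Least-squares fit of y by its first n_basis cosine basis functions,
        # a smooth model of the attenuation-driven variation with angle.
        k = np.arange(len(y))
        A = np.column_stack([np.cos(np.pi * n * k / len(y))
                             for n in range(n_basis)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return A @ coef

    def estimate_head_gains(counts, head_index):
        # counts: decay-corrected total counts per projection angle;
        # head_index: which of the 3 heads (0, 1, 2) acquired each angle.
        def cost(g12):
            gains = np.concatenate(([1.0], g12))
            adjusted = counts / gains[head_index]
            return np.mean(np.abs(adjusted - cosine_fit(adjusted)))
        res = minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead")
        return np.concatenate(([1.0], res.x))  # sensitivities vs head 0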
Speckle contrast C is measured for 5 x 5 pixel squares using the accepted definition C = σs/⟨I⟩, where σs is the spatial standard deviation and ⟨I⟩ the mean of the intensity. The speckle contrast is reduced with higher dermal blood flow. Results: Speckle contrast measurements generally have a pulsatile flow component. By excluding gross movements using control measurements, and as the laser speckle at visible wavelengths is dominated by near-surface tissue, we can confirm that we are measuring pulsatile flow in the dermal capillaries. Figure 1 shows a speckle contrast record measured through the toenail. The pulse is clearly visible in this record, though there is also a random noise component. By establishing the pulse positions in the record using a filtered version of this signal, we can overlay a number of pulses and calculate a statistical average pulse shape. Figure 2 shows example mean pulse shapes for two people. The flow is consistently repeatable for Subject 2 and significantly different for Subject 1. To produce a perfusion index comparable to laser Doppler flowmetry, we use a multiple-exposure technique. The perfusion index produced is proportional to the concentration and mean velocity of the red blood cells in the tissue. Figure 3 shows an example of the Perfusion Index (P.I.) measured over 5 minutes following vasodilation induced by hot water. Conclusions: Speckle contrast analysis is now sufficiently fast to measure the waveform of pulsatile flow in the dermal capillaries, something which may have clinical applications. Measurements show differences in the pulse waveforms from different subjects, suggesting that pulsatile flow profiling may be a useful measurement. Progress has been made in quantifying the perfusion information in speckle images, making it more comparable to that from established Doppler probes. Aaron. Introduction: Hyperglycaemia is prevalent in critical care due to the stress of the condition, even in patients without a previous history of diabetes. Tight glycaemic control is associated with significantly improved patient outcomes. However, providing tight control is difficult due to evolving patient condition and interactions with common drug therapies, resulting in recurring hyperglycaemic episodes. Quantifying the impact of drug therapies on blood glucose and metabolic control would enable optimised delivery of these drugs and facilitate tight control. This research quantifies the impact of common steroid and inotrope drug therapies on a clinically validated measure of metabolic function, insulin sensitivity, over the critical first 2 days of ICU patient stay. The goal is to quantify the impact of these common drug therapies on the ability to provide tight control and thus balance these effects clinically. A clinically validated, model-based measure of insulin sensitivity (SI) is used as the marker of metabolic function. Blood glucose, insulin and nutrition data from 53 hyperglycaemic subarachnoid haemorrhage patients who received steroids (dexamethasone) and/or inotropes (noradrenaline) in the Christchurch ICU from 2003-2007 were used to determine SI over the first 48 hours of stay. This cohort possesses little co-morbidity, so changes in SI can be attributed to the drug therapies alone. Patients were classified on outcome (survivors, non-survivors) and therapy (steroids, inotropes, both or neither). SI is compared over 48 hours for each of the 8 patient groups. Results: Four main results emerged: 1.
Insulin sensitivity increases gradually for all patients over the 48 hour period (p < 0.05); 2. Insulin sensitivity was significantly suppressed in survivors given steroids (versus survivors not given steroids), by a factor of approximately 2x (p < 0.005); 3. Inotropes had no significant effect on the change in SI over time, regardless of outcome; 4. Insulin sensitivity was always suppressed in non-survivors, regardless of drug therapy. Discussion: Suppressed insulin sensitivity appears to be a consistent marker of mortality, and it is of particular concern that steroids cause similar suppression. These results may also indicate the reason that many studies on steroid administration report mixed results with respect to mortality. Finally, a reduction of 50% in insulin sensitivity for patients on steroids will also have a significant impact on the tight glucose control regime and the effort required to maintain euglycaemia, creating a much more difficult clinical control problem. Conclusion: Model-based metabolic markers (SI) can provide significant insight into the clinical impact of common critical care drug therapies. The initial results from this study will begin to enable clinicians to optimise the trade-offs between tight glycaemic control and the metabolic impact of steroid and inotrope drug therapies. Introduction: A significant number of patients admitted to the Intensive Care Unit (ICU) require some form of respiratory support. In the case of Acute Respiratory Distress Syndrome (ARDS), the patient often requires full intervention from a mechanical ventilator. ARDS is also associated with a mortality rate as high as 70%. Despite many recent studies on ventilator treatment of the disease, there are no well-established methods to determine the optimal Positive End Expiratory Pressure (PEEP) ventilator setting for individual patients. A model of fundamental lung mechanics is developed based on capturing the recruitment status of lung units. The model produces good correlation with clinical data, and is clinically applicable due to the minimal number of patient-specific parameters to identify. The ability to use this identified patient-specific model to optimise ventilator management is demonstrated. This minimal model also provides a clinically useful, simple platform for continuous monitoring of lung unit recruitment for a patient. The main objective of this research is to develop the simplest possible model that is also clinically effective. The model presented represents the lung as a collection of lung units; a lung unit corresponds to a set of distal airways and attached alveoli. The lung is divided into several "horizontal" compartments to simulate different levels of superimposed pressure. The compartment at the bottom experiences higher superimposed pressure than the ones above due to the weight of the lung. Recent studies suggest that recruitment and derecruitment are the dominant cause of volume change, rather than isotropic, "balloon-like" expansion of alveoli as had been traditionally thought. The model developed thus consists of lung units with only two possible states: recruited or not recruited. The recruitment and derecruitment of the modelled lung units are controlled by the distributions of Threshold Opening Pressure (TOP) and Threshold Closing Pressure (TCP), respectively. Threshold pressures are assumed to follow a normal distribution in pressure, based on studies in the literature. Once a lung unit is opened, it assumes a volume defined by a unit compliance curve.
The unit compliance is based on a sigmoid curve. A total of four variables is used to capture the essential features of the measured pressure-volume curves: the TOP distribution mean and standard deviation, and the TCP distribution mean and standard deviation. These parameters are effectively two each for the inflation and deflation limbs. Other variables, such as PEEP, PIP and tidal volume, are assumed known, as they are set by the clinician or can be obtained directly from the ventilator. Results: The model is validated by fitting and predicting with clinical PV data of 4 patients at different PEEP levels. The TOP and TCP parameters were fit parametrically. Data from different PEEP settings of the same patient were fitted by shifting the distribution mean value, while the other parameters were fixed. Prediction is done by using data from 2 PEEP settings to predict the third, or by using data from 3 PEEP settings to predict the fourth. For the four patients the overall average absolute error in the predicted volumes varied from 15.92 ml (1.81%) to 20.65 ml (3.41%) for inflation and from 36.63 ml (4.08%) to 41.06 ml (7.18%) for deflation. Note that the prediction was only performed over the steady portion of the curve, to avoid the transition regions. Discussion: Shifting the means of the TOP and TCP normal distributions while keeping the standard deviations fixed gave good matches to the clinical data. A mean shift represents the effect of the dynamic mechanism of lung units at different PEEP values. More specifically, once a collapsed lung unit is recruited, it does not necessarily collapse again at the same pressure at which it was recruited; instead, it stays recruited at a lower pressure. This effect is especially significant in the ARDS lung because of the reduced number of functional lung units and the lower compliance of the overall lung. The benefit of recruitment manoeuvres on ventilated patients is based on this dynamic. The method also allowed a good prediction of lung response to data not used in the identification. Therefore, there is potential to use the model to trial and test various PEEP settings before application to the patient. Another benefit of this approach is to provide constant monitoring of a patient's level of lung recruitment, and thus of the level of ARDS and the impact of therapy as the patient's condition evolves. Conclusion: A minimal model of the mechanics of a ventilated lung is developed. It employs only 2 unique parameters for each limb of the breathing cycle. The model was validated by fitting to clinical data and predicting lung response to various PEEP settings. These initial results show that the model could be used both to monitor a patient's condition and to predict PEEP therapy response. Longer term, this approach may lead to improved management of ARDS in critical care. Introduction: Opioids, commonly used for post-operative pain relief, can cause respiratory depression in 0.1-1% of patients, with subsequent hypoxia. Timely detection of this loss of airway tone would be useful in preventing damage to these patients. The major effect of opioids is directly on the central nervous system, with decreases in sympathetic activity and reductions in parasympathetic activity and vagal tone. Specifically for this study, opioids affect respiratory drive and rhythm; decreased activity in the motor neurons driving respiration is modulated by ANS respiratory centres.
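Returning to the minimal lung-mechanics model above: its core is that the recruited fraction at a given pressure is the cumulative normal of the threshold-pressure distribution. A sketch under simplifying assumptions (single compartment, fixed unit volume, illustrative parameter values; the sigmoidal unit compliance and superimposed-pressure compartments are omitted):

    import numpy as np
    from scipy.stats import norm

    def recruited_volume(pressure, mean_cmH2O, sd_cmH2O,
                         n_units=1000, v_unit_ml=3.0):
        # Fraction of units whose threshold pressure lies below the applied
        # pressure, times the volume contributed per recruited unit.
        return n_units * v_unit_ml * norm.cdf(pressure, loc=mean_cmH2O,
                                              scale=sd_cmH2O)

    p = np.linspace(0.0, 40.0, 81)  # airway pressure, cmH2O
    v_inflation = recruited_volume(p, mean_cmH2O=20.0, sd_cmH2O=5.0)  # TOP limb
    v_deflation = recruited_volume(p, mean_cmH2O=10.0, sd_cmH2O=5.0)  # TCP limb
    # A TCP mean below the TOP mean reproduces the hysteresis between the two
    # limbs; fitting a new PEEP level amounts to shifting the means, as
    # described in the abstract.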
Respiratory modulation is reflected in heart rate changes, creating the possibility that the opioid effect could be measured by monitoring heart rate variability (HRV). HRV uses changes in heart rate to indirectly observe changes in the activity of the autonomic nervous system and, in particular, in the activity of its parasympathetic and sympathetic branches. Methods: This pilot study investigated the ability of very short term (30 s) heart rate variability to predict the occurrence of loss of airway tone in patients administered fentanyl pre-operatively. While many HRV indices are correlated, this investigation used the major HRV indices (e.g. SDNN, rMSSD, pNN50) together with Poincaré indices that are known not to be correlated with these major indices. Results: Loss of airway tone could not be predicted using the major HRV indices, nor with the uncorrelated Poincaré indices. Fentanyl causes respiratory depression through modulation of the ANS respiratory centres; however, very short term measures of HRV are not sensitive enough to these modulations to predict the loss of airway tone. Introduction: Brain disorders can decrease a person's ability to perform the physical and cognitive functions necessary for safe driving. A computerized battery of sensory-motor and cognitive tests (SMCTests) has been developed comprising tests of visuoperception, visuomotor ability, complex attention, visual search, decision-making, impulse control, planning, and divided attention. This study investigated the power of binary logistic regression (BLR) and nonlinear causal resource analysis (NCRA) models to classify blinded on-road pass or fail in a large group of people with brain disorders referred to the Driving and Vehicle Assessment Service (DAVAS) at Burwood Hospital. Methods: Two hundred referrals to DAVAS with brain disorders were recruited and their performance on SMCTests and a blinded on-road assessment determined. Referrals had definite or suspected brain disorders, comprising stroke (n=61), age-related substantive cognitive decline (n=55), dementia (n=43), traumatic brain injury (n=21), Parkinson's disease (n=8), brain tumour (n=3), and other brain disorders (n=9). Figure 1 shows the SMCTests apparatus at Burwood Hospital. Forward-stepwise BLR and NCRA predictive models based on SMCTests performance were developed to classify blinded on-road driving performance for the whole referral group and for different diagnostic subgroups. Results: Both BLR and NCRA models were able to classify on-road pass or fail for the whole referral group with an accuracy of 69.5%. Greater accuracy could, however, be achieved by splitting the referrals into two groups: (1) Dementia (dementia or age-related substantive cognitive decline) and (2) Non-dementia (all other brain disorders). The BLR models classified on-road driving outcome as pass or fail with accuracies of 76% (Dementia) and 75% (Non-dementia), while the NCRA models had accuracies of 77% (Dementia) and 80% (Non-dementia). Discussion: For both the dementia and non-dementia groups, NCRA identified the same measures as predictive of on-road driving as BLR, but was able to identify and use additional measures to improve accuracy. NCRA appears better able to accommodate outliers, due to it being a non-linear modelling method based upon individual performance-limiting impairments. Measures of attention (complex and divided) were important for predicting on-road driving in both diagnosis groups but were most critical in the dementia group.
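For reference, the major time-domain HRV indices named in the fentanyl study above have standard definitions; a minimal sketch computing them (plus the Poincaré descriptors SD1/SD2) from a series of NN intervals:

    import numpy as np

    def hrv_indices(nn_ms):
        # nn_ms: NN (normal-to-normal) intervals in milliseconds, e.g. from
        # a very short 30 s pre-operative segment.
        nn = np.asarray(nn_ms, dtype=float)
        d = np.diff(nn)                              # successive differences
        sdnn = nn.std(ddof=1)                        # overall variability
        rmssd = np.sqrt(np.mean(d ** 2))             # beat-to-beat variability
        pnn50 = 100.0 * np.mean(np.abs(d) > 50.0)    # % of |diffs| > 50 ms
        sd1 = np.sqrt(0.5) * d.std(ddof=1)           # Poincare width (short term)
        sd2 = np.sqrt(max(2.0 * sdnn ** 2 - sd1 ** 2, 0.0))  # Poincare length
        return {"SDNN": sdnn, "rMSSD": rmssd, "pNN50": pnn50,
                "SD1": sd1, "SD2": sd2}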
This indicates a greater emphasis on attention deficits, rather than physical function, in the dementia group. In the non-dementia group, prediction of on-road driving was most accurate with assessment of a broader range of sensory-motor and cognitive functions. Conclusions: Predictive models which emphasise assessment of attention deficits are more accurate for predicting on-road driving in people with dementia. Predictive models which include a broader assessment of sensory-motor, cognitive, and coordinated sensory-motor and cognitive function are more accurate for predicting on-road driving in people with a variety of other brain disorders such as stroke and traumatic brain injury. However, while SMCTests provides useful data regarding areas of sensory-motor and cognitive dysfunction in people with brain disorders, an on-road assessment is still required to make a final decision regarding on-road driving safety. Further research is required to identify other factors which underlie inability to drive safely in people with brain disorders and which may increase the accuracy of predictive models. Other factors must be either (1) sensory-motor and/or cognitive deficits not currently being fully assessed or (2) unrelated to sensory-motor or cognitive functions (e.g., attitude, confidence, insight, driving skills, road code knowledge). Lan. Methods: This new device can measure and store force and ROM measurements simultaneously. The device has known accuracies of ±1° for angle and ±1 N for force. An LCD touch screen displays the measurements, the patient's name, the joint, the side of the body, the movement of the joint, and the contraction dynamics. For this research, only concentric flexion of the right elbow was tested. To assess the validity of the HHD, a mechanical arm was developed that produces repeatable profiles of strength versus ROM. The arm consists of two linkages, representing the upper arm and the forearm, connected by a pinned joint at the elbow. The upper arm is attached to a rigid frame with a ball joint to represent small movements of the shoulder during elbow flexion. Several important landmarks, such as the acromion, the lateral epicondyle of the humerus, the olecranon and the radial styloid, were marked on the arm for identification purposes. A load cell was used as part of the forearm to measure the force applied at the wrist, and a potentiometer was attached to the elbow to accurately measure the relative angle between the forearm and the upper arm. This arm simulator was driven by a pneumatic cylinder to mimic the flexion movement of the elbow. The maximum ROM was 130° and the maximum strength at the wrist was limited to 120 N. The mechanical arm was able to simulate three different profiles, each differing in ROM and strength. In the test, each profile was randomised and repeated three times by the Biodex and five times by the participating physiotherapist. Results: Figure 1 shows the torque versus joint angle for the three profiles measured using the HHD and the Biodex. The results show good agreement between the HHD and the Biodex. Discussion: Small differences in the start and end ROM were partly due to the different methods of supporting the upper arm used by the physiotherapist and the Biodex. More research is needed to assess the inter-rater reliability of the HHD and to validate it with real subjects in a clinical trial.
The measurements taken with the HHD are in good agreement with those measured using the Biodex, which suggests that the device may be a useful tool for clinical assessment of strength and joint ROM. Elyse Passmore 1, Geoff Frawley 2, Paul Junor 1 and Peter Taunton. Introduction: Non-invasive pulse oximetry is well established and in regular clinical use. It aims to provide medical staff with continuous information on the patient's arterial blood oxygen saturation (SpO2) and heart rate. Since the early 1990s pulse oximetry has been a mandatory international standard for monitoring during anaesthesia, but there are limitations on the availability and reliability of conventional transmission pulse oximetry in some circumstances [1]. This study examines methods for the elimination of false or inaccurate readings in pulse oximetry due to poor peripheral perfusion. Kyriacou (1999) suggested a more central monitoring site, such as the oesophagus, as it will remain adequately perfused during periods of cardiovascular stress. The aim of the study is the development of a fibre-optic oesophageal pulse oximeter prototype for use in paediatric and neonatal patients. We believe this device will have potential applications in major abdominal and thoracic surgery, congenital cardiac surgery and children with major burns to the peripheries. A major focus of the project is on the theoretical design and development of a suitable fibre-optic probe. The oesophageal probe is based upon reflectance pulse oximetry, comprising optical fibres coupled to red (660 nm) and infrared (940 nm) light sources and a photodiode. The probe is accompanied by signal conditioning circuitry for detection and pre-processing of the red and infrared photoplethysmograph (PPG) signals before digitisation for transfer to a laptop computer (see Figure 1). Results: Preliminary studies indicate the feasibility of using fibre-optics in pulse oximetry. However, several hurdles have been encountered, including relatively small reflected signals; the use of fibre-optics also limits the area of tissue illuminated and the usable area of the photodiode. Discussion: Previous attempts have been made to develop an oesophageal pulse oximeter; however, none have investigated the use of fibre-optics to achieve the miniaturisation needed for neonatal and paediatric use. The study's focus is on the design and development of a fibre-optic oesophageal probe for SpO2 monitoring in paediatrics and neonates. Although generally reliable, pulse oximeters do fail in patients undergoing prolonged surgical procedures. The two processes, locating and targeting tumors, are somewhat independent, and in principle different implementations of these processes can be interchanged. The accuracy of integrated systems ranges from 1 to 2 mm. Advanced localization and targeting methods have an impact on treatment planning, and also present new challenges for quality assurance (QA), namely that of verifying real-time delivery. Some methods to locate and target moving tumors with radiation beams are currently approved for clinical use, and this availability and implementation will increase with time. Extensions of current capabilities will be the integration of higher-order dimensionality into the estimate of the patient pose, and real-time reoptimization and adaptation of delivery to the dynamically changing anatomy of cancer patients.
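As background to the reflectance pulse-oximetry abstract above: SpO2 is conventionally derived from the red and infrared PPG signals via the "ratio of ratios". A sketch, with AC taken as peak-to-peak amplitude, DC as the mean, and a generic textbook linear calibration (110 - 25R) standing in for a probe-specific calibration:

    import numpy as np

    def spo2_estimate(red_ppg, ir_ppg):
        def ac_dc(x):
            x = np.asarray(x, dtype=float)
            return np.ptp(x), x.mean()       # AC: peak-to-peak, DC: mean
        ac_r, dc_r = ac_dc(red_ppg)
        ac_ir, dc_ir = ac_dc(ir_ppg)
        R = (ac_r / dc_r) / (ac_ir / dc_ir)  # ratio of ratios
        return 110.0 - 25.0 * R              # generic empirical calibration (%)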
Locating and targeting moving tumors with radiation beams will facilitate improved conformality of dose distributions with temporally changing anatomy. Integration and implementation of techniques to locate and target moving tumors are ongoing. The Large Hadron Collider (LHC) is the largest scientific instrument ever built. 27 km of superconducting magnets cooled to just above absolute zero are used to accelerate counter-rotating protons to 7 TeV in each direction, the energy needed to explore the region where the elusive Higgs boson is expected to be found. Particle bunches containing about 10^9 protons are brought into collision at a rate of 40 MHz, producing around 1000 particles every 25 ns. Large experiments have been built around the collision points. A key component of most of the experiments is the hybrid pixel detectors at the heart of the particle tracking systems. These detectors combine pixellated Si sensors with high-performance CMOS chips to 'photograph' each bunch crossing and provide selected 'images' for event reconstruction. Through the Medipix project the hybrid pixel technique has been adapted to X-ray imaging and other applications. X-ray photons are detected and counted one by one, and noise-free images can be generated. Moreover, as the X-rays are processed one at a time, it is possible to bin them by energy, leading to the generation of colour X-ray images. This may lead to a new generation of X-ray imaging devices and permit novel diagnostic techniques to be developed in medicine. The presentation will explain how the development of hybrid pixel detectors for High Energy Physics has led to the development of a new kind of imaging device for medical and other applications, serving as an introduction to the Medipix Workshop. Examples of existing applications will be shown and some perspectives on future developments provided. Introduction: The most important function of the respiratory system is the exchange of oxygen and carbon dioxide: moving oxygen in and moving carbon dioxide out. The Oxygen Cascade is a useful way to look at many aspects of respiratory physiology. It describes the steps involved in moving oxygen from the atmosphere all the way to our cells, where the oxygen is utilised. During this talk, I will focus on these steps and relate them to how we anaesthetise and monitor patients. Atmosphere to Alveoli: The partial pressure of oxygen (pO2) in dry atmospheric air at sea level is 160 mmHg (21%). Higher concentrations of oxygen are usually administered during general anaesthesia, and are measured using a fuel cell or paramagnetic analyser. During normal inspiration, air is drawn into the lungs by increasing the intrathoracic volume and making the intrathoracic (and therefore alveolar) pressure subatmospheric. The situation is completely different during anaesthetic or ICU ventilation, where positive pressure ventilation is usually used. During inspiration, the atmospheric air is warmed and humidified in the upper airways, resulting in a reduction in the pO2 to 150 mmHg. During anaesthesia, a heat and moisture exchange filter (HMEF) is used to reduce heat and water loss. By the time the air gets to the alveoli (the tiny air sacs where gas exchange takes place), the partial pressure of oxygen has dropped to approximately 100 mmHg, mainly because of dilution by carbon dioxide being transferred from the blood into the alveoli.
The pO2 in the alveoli cannot be measured directly but can be estimated using the alveolar gas equation: PAO2 = PiO2 - PaCO2/R, where PAO2 is the pO2 in the alveoli, PiO2 is the pO2 of humidified inspired air, PaCO2 is the pCO2 in arterial blood, and R is the respiratory quotient, approximately 0.8. (For example, with PiO2 = 150 mmHg and PaCO2 = 40 mmHg, PAO2 ≈ 150 - 40/0.8 = 100 mmHg.) Alveoli to Blood: Movement of oxygen from the alveoli to the blood is by simple diffusion. This occurs rapidly because, in the human lung, there are about 300 million alveoli closely associated with tiny blood vessels, resulting in very small diffusion distances. In the normal lung, the pO2 drops from 100 mmHg in the alveoli to 95 mmHg in the arterial blood. This is the "A-a gradient" and it may be massively increased in lung disease. There are a number of reasons for this gradient, including shunt (mixing in of deoxygenated venous blood), mismatching of alveolar ventilation and blood flow, and diffusion problems. An example of abnormal shunt that can occur in a ventilated general anaesthesia patient results from atelectasis, or closure of small airways: inspired oxygen is unable to reach the collapsed alveoli and therefore the blood next to these alveoli remains deoxygenated, resulting in an overall lowering of arterial pO2. During ventilation, atelectasis can be prevented or treated by the use of positive end expiratory pressure (PEEP). The delivery of oxygen to the tissues depends on adequate blood flow and adequate oxygen content in the blood. Most oxygen is bound to haemoglobin. Each haemoglobin molecule reversibly binds four oxygen molecules, and the amount of binding can be quantified by measuring oxygen saturation. Continuous monitoring of arterial oxygen saturation by pulse oximetry has contributed hugely to improving anaesthetic safety during the last twenty years. The pO2 can also be measured in the laboratory using a blood gas machine (Clark or oxygen electrode). The transfer of oxygen from the blood to the cells is analogous to the situation in the lungs. Diffusion is rapid because of the very small distances, and there is rapid equilibration between capillary blood and the interstitial fluid surrounding the cells. The pO2 at this point is approximately 40 mmHg. Oxygen then diffuses intracellularly to the mitochondria (pO2 5-20 mmHg), the "power stations" of the cell, where oxygen is used to make the energy-rich molecules that keep us alive. Trevor Ackerly, Janet Droege, Craig Lancaster, and Kathleen Roxby, William Buckland Radiotherapy Centre, Alfred Hospital, Melbourne, Australia. Introduction: The William Buckland Radiotherapy Centre (WBRC) at the Alfred Hospital recently installed the BrainLab ExacTrac Frameless stereotactic system, an integrated treatment system which extends from imaging, through planning, to treatment. The additional components in the treatment room comprise an infra-red detection system, a Bluetooth connection, a video camera, a pair of fixed oblique in-floor mounted kilovoltage X-ray tubes, matching roof-mounted detectors, and a tiltable patient support termed the robotic couch. The imaging process involves a mask-based fixation and registration system at the CT scanner. The iPlan treatment planning system automatically recognizes the registration and uses it to relate the isocentre position to the CT data when exported to the ExacTrac computer on the treatment unit. At the treatment unit, ExacTrac provides four independent, and successively more accurate, corrections or checks of the isocentre position.
Firstly, the patient is prepositioned at the isocentre by an infrared position-sensing system. Secondly, this initial positioning is checked with a target positioning array against the lasers. Thirdly, automatic planar image fusion between in-room kilovoltage images and CT-derived DRRs gives a 6D correction. A manual review of the planar image fusion provides the final confirmation before treatment. Methods: A Rando phantom was CT-scanned with the Head and Neck Localizer and Target Positioner in place. The images were imported into iPlan and an isocentre was added. The CT images and isocentre were then exported from iPlan to ExacTrac. The phantom was then positioned for treatment with the use of the ExacTrac system, including kilovoltage imaging and 6D correction and verification. Demonstrating the clinical practice being modelled, Figure 1 shows the ExacTrac in-room user interface at the point where patient prepositioning has been achieved and the user is asked to activate the kV image correction process. Figure 2 shows a patient DRR used in fusion with a patient kV image, an example of which is given in Figure 3. After correction, the position of the isocentre was determined using a modified Winston-Lutz test. An electronic level was used to test the accuracy of the couch tilt. Commencing with the conclusion of the first fractionated treatment, the summaries automatically recorded by ExacTrac detailing the 6D corrections for each fraction were reviewed. A PTW 77334 1 cm³ ion chamber was used to measure the dose required to obtain a kilovoltage image. Results: On the basis of segmented phantom models of intracranial treatments, the accuracy of isocentre positioning relative to bony anatomy for intracranial treatments was determined to be 0.7 mm (with 1.25 mm CT slices). The tilt of the couch was found to be within the specified accuracy of 0.2 degrees. The first treatment summary review showed an average correction angle of 0.3°, with a maximum correction of 1.0°. The dose to the patient surface from a single exposure was of the order of 0.1 mGy for the standard verification exposure at 80 kV, 100 mA, 100 ms. Discussion: The minimum slice thickness the CT is capable of is 0.625 mm, but it was found that this did not improve the accuracy of isocentre positioning. Although the pitch angle of the couch was found to be within tolerance, for intracranial treatments the robotic couch pivot point is about 170 cm from the isocentre. An average pitch angle correction of 0.3° therefore corresponds to a height change of about 9 mm at the isocentre (170 cm × tan 0.3° ≈ 9 mm), and 1° corresponds to 3 cm. The 0.2 degree tolerance for the robotic couch pitch angle does not, however, amount to a 6 mm tolerance in the positioning of the isocentre, because the infrared positioning system is used to drive the isocentre to the correct height after the robotic couch has pitched. The tilt mechanism can in this sense be thought of as a requirement to implement 2D fusion, rather than an objective. ExacTrac is an efficient and accurate method for implementing image guided frameless stereotactic treatments. Intracranial treatments have commenced, and the trigeminal neuralgia program will shortly move to the frameless system. The clinical program is also about to extend to extracranial spinal treatment. Introduction: Treatment of lesions in the lung with radiotherapy requires particular treatment margins to account for breathing-induced tumour motion. These margins may be reduced if real-time tumour tracking methods are applied, i.e.
if the tumour position were known at all times during treatment, the relative position between the tumour and the beam could be adjusted. Tracking can potentially be done in two different ways: either the aperture of the beam is continuously adjusted to the current tumour position by means of dynamic motion of individual leaves of the multileaf collimator, or the patient is shifted continuously relative to the stationary radiation beam. With our adaptive tumour tracking system (ATTS), which we are in the process of developing, we pursue the second approach. The ATTS is designed to detect and track the tumour in real time. In this work we concentrate on the tracking part only, which aims to adjust the patient position during irradiation; patient position correction is achieved by adjusting the position of the treatment table during irradiation. Therefore the behaviour of the robotic HexaPOD table (Medical Intelligence, Schwabmünchen, Germany) was investigated in terms of its dynamic capabilities and its limitations. A 4D phantom capable of simulating tumour trajectories was developed, consisting of an industrial 6-axis robot (MELFA Industrial Robot, RV-1A Series, Mitsubishi Electric, Ratingen, Germany) to which an arm containing a tumour surrogate was attached (see Figure 1). This phantom was able to simulate real patient tumour trajectories in 3D space with sub-millimetre accuracy. Optical markers, which were tracked with an infrared (IR) tracking camera (Polaris, NDI, Waterloo, Ontario, Canada), were attached near the tumour. Several trajectories, ranging from simple sinusoidal motion with various amplitudes and frequencies to real patient tumour motion, were fed to the robot. The tumour positions determined by the IR camera were sent to the developed control system, which aimed to counter-steer the tumour movement such that the tumour remained stationary in space. The maximum speed and acceleration of the HexaPOD varied for the different directions, ranging between 8-9.5 mm/s and 29.5-34.5 mm/s², respectively. For the real patient tumour trajectories, an average reduction of tumour motion to 68% of the original amplitude was observed. It is interesting to note that all baseline drifts of the mean tumour position were completely compensated for by the system; this is illustrated in Figure 2 for one of the patient tumour trajectories. This initial study has shown that it is indeed feasible to dynamically compensate for breathing-induced tumour motion in the lung with the HexaPOD table. For the current set-up, which included dedicated non-clinical firmware provided by Medical Intelligence, the limitations in terms of maximum speed and acceleration were determined. It is technically possible to use different actuators to increase both the maximum speed and the acceleration of the HexaPOD. However, in terms of patient comfort and reliability of the ATTS, its main application might be in correcting for baseline drifts, which we have shown to be feasible with the current design. Introduction: Radiotherapy of the intact breast often requires the use of beam modifiers to achieve a homogeneous dose distribution within the breast tissue. More recently, beam modulation techniques, including forward-planned segments and intensity modulated radiotherapy, have been used to achieve a uniform dose distribution. The proximity of the breasts to the chest wall subjects them to respiration-induced motion.
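A minimal sketch of the counter-steering idea in the ATTS abstract above: a proportional velocity command clamped to the HexaPOD's measured speed and acceleration limits (one axis only; the gain and tick length are assumptions, and the real controller is more sophisticated).

    import numpy as np

    def couch_velocity_command(tumour_offset_mm, v_prev_mm_s, dt_s=0.05,
                               gain=2.0, v_max=8.0, a_max=29.5):
        # Drive the couch so that the tumour's displacement from its planned
        # position is counter-steered towards zero.
        v_desired = -gain * tumour_offset_mm
        v_desired = np.clip(v_desired, -v_max, v_max)             # speed limit
        dv = np.clip(v_desired - v_prev_mm_s, -a_max * dt_s, a_max * dt_s)
        return v_prev_mm_s + dv                                   # accel limit

    # v_max and a_max here use the lower bounds measured for the HexaPOD
    # (8-9.5 mm/s and 29.5-34.5 mm/s^2).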
In this study respiratory motion was modelled (in 1D) in a solid water phantom, and the dose distributions for various-sized multileaf collimator (MLC) apertures were measured. The effect of respiratory motion on the dose distribution for a breast radiotherapy patient was determined by performing a 4DCT radiation therapy treatment planning (RTP) study. Several breast radiotherapy techniques were assessed for their sensitivity to patient respiration. Methods: An Anzai motion phantom was coupled to a cuboid solid water phantom, with the direction of motion perpendicular to beam incidence. Dose profiles were measured with radiographic film positioned at d_max in the phantom. Dose profiles were measured for 1, 2 and 10 cm wide apertures with the phantom static, under sinusoidal motion, and under a more complex respiratory-type waveform involving a power-cosine motion. A convolution kernel was created to model the effect of motion and verified against the measured data (a minimal sketch of this convolution approach is given below). A range of motion amplitudes and the effect of random setup errors were simulated. A radiotherapy treatment planning study was also performed using both 3DCT and 4DCT patient data. On the 3DCT data, motion was mimicked by introducing isocentre shifts and assigning each isocentre a separate beam set; the beam weights were set according to a probability density function that described the motion. A 4DCT planning study was performed with patient motion inherently accounted for within the data set. The patient respiratory trace taken at the time of CT acquisition was modelled and a probability density function extracted; the dose from each phase of the breathing cycle was weighted according to this function and then summed to provide the total dose accumulated over the entire breathing cycle. Standard tangents, a forward-planned multi-segment treatment, inverse IMRT, and a non-coplanar technique were analysed. Plans were compared using dose difference maps and dose-volume histogram analysis. Results: Motion with amplitudes typical of patient respiration was found to have a large effect on the dose distributions measured in the phantom, as shown in Figure 1. "Static" and "Anzai Motion" are measured distributions; "Patient Trace" and "2 cm Motion" are convolved static dose profiles derived from a patient motion trace. Amplitudes of motion comparable to or larger than the aperture width produced the largest dose variation compared to the static dose distribution. The modelling of random setup errors in the presence of respiratory motion also revealed considerable perturbation of the dose distribution. In the planning study, dose variation resulting from respiratory motion was concentrated in regions of high dose gradient. This can be seen in Figure 2, where the same plan was generated on the 50% exhale phase reconstruction, which is representative of a 3DCT data set, and on the multi-phase 4DCT data set. The image is a dose subtraction of the two plans in a sample slice for a 2 Gy prescription. The film dosimetry and convolution modelling illustrate the need to understand the implications of patient motion for the dose distribution when using a segmented delivery technique for tangential irradiation of the breast, particularly when specifying restrictions on aperture size. The planning study showed that different planning techniques vary in their sensitivity to motion, and that this sensitivity will depend on the number and direction of beams, the magnitude of motion, and the placement of shielding.
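To make the convolution model concrete, the following is a minimal sketch (in Python, with illustrative names and parameters; this is not the authors' code) of how a static 1D dose profile can be blurred by a motion probability density function derived from a respiratory trace:

import numpy as np

def motion_blur_profile(static_profile, dx, displacements, weights):
    # Superpose shifted copies of the static profile, weighted by the
    # probability of each displacement (the motion PDF). np.roll wraps
    # at the edges, which is acceptable for a profile that is zero there.
    blurred = np.zeros_like(static_profile)
    for d, w in zip(displacements, weights):
        blurred += w * np.roll(static_profile, int(round(d / dx)))
    return blurred

# Illustrative example: a 2 cm aperture under 1 cm amplitude sinusoidal motion.
dx = 0.05                                        # cm per sample
x = np.arange(-5.0, 5.0, dx)
static = ((x > -1.0) & (x < 1.0)).astype(float)  # idealised 2 cm wide field
trace = 1.0 * np.sin(np.linspace(0, 2 * np.pi, 1000, endpoint=False))
counts, edges = np.histogram(trace, bins=41)
weights = counts / counts.sum()                  # motion PDF from the trace
centres = 0.5 * (edges[:-1] + edges[1:])
blurred = motion_blur_profile(static, dx, centres, weights)

The same weighting-and-summing logic underlies the 3DCT planning study described above, with isocentre shifts playing the role of the profile shifts.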
The impact of patient respiration on the dose distribution must be considered when determining the size and shape of apertures to be used in a segmented radiotherapy treatment of the breast. Commissioning with phantoms does not necessarily imply that 4D scanning will be successful on patients who exhibit irregular breathing cycles. We wish to determine the limits of variability in breathing beyond which scans will be unacceptable, and recommend these for clinical decision making. Methods: A standard CT density phantom (Nuclear Associates) with test inserts was used to measure the CT numbers during both normal 3D and 4D scanning. Doses were measured in various phantoms during an extended-length scan using both a Farmer ion chamber (IC) (Nuclear Enterprises) and TLDs (Harshaw), and compared with the scanner's calculated CTDI. Two motion phantoms were used: an in-house constructed phantom (sawtooth pattern) and the Quasar Respiratory Motion Phantom (Modus Medical Devices). The 4DCT phase reconstruction capability was evaluated by varying breaths per minute (BPM), period, and mode (sinusoidal and pseudo exhale-inhale patterns). Results: No significant differences in CT number were found between the standard 3D scan and the 4DCT (individual phase data sets, untagged scan). The reconstructed volumes of the geometric phantom inserts matched the manufacturer's stated dimensions, and the observed range of these structures closely tracked their known motion. For a standard thoracic 3D scan, the doses measured in the Quasar Phantom by the IC and TLD were 28.8 mGy and 28 ± 3 mGy respectively. The CT scanner issued a CTDI of 21.4 mGy. For an equivalent 4D scan, the IC dose was 41.1 mGy and the TLD dose was 41 ± 6 mGy, compared to a stated CTDI of 30.5 mGy. It was decided that for 4DCT scans the DLP (dose-length product) should be limited to about 1.5× the typical chest scan DLP of 450 mGy·cm. The resultant images made with these parameters were deemed adequate for our needs. 4DCT scans of the Quasar motion phantom demonstrated that the reconstruction process provides images that accurately separate out data corresponding to each particular phase. The paper will present further results of variations in the amplitude and timing of irregularities introduced into the phantom's breathing pattern. We have confirmed for our scanner that spatial and temporal integrity, CT number, noise, and dose are acceptable for our chosen method of 4DCT. Subsequent in-service presentations, workshops and documentation have facilitated the training of physics and RT staff. By varying the phantom motion during 4DCT scanning we were able to demonstrate that unacceptable artefacts in the phase-reconstructed CT data sets do occur, but that appropriate tagging of the breathing cycle can lead to satisfactory results. Careful identification of peak inspiration on the breathing trace for our inaugural patient, who exhibited regular breathing, yielded successful 3DCT and 4DCT scans for planning. Introduction: Conventional analytic approaches to margin calculation for conformal and/or intensity modulated radiation therapy (IMRT) make various assumptions about the statistical properties of the motion of the target volume and the nature of the treatment regimen. Two of the most important assumptions generally made are: i) that the variance of the random component of the patient inter-fraction motion is the same for each patient (i.e.
that the variance of the treatment error is the same for all patients); and ii) that the treatment regimen can be regarded as consisting of an infinite number of infinitely small treatment fractions. Analysis of historically collected treatment position data for prostate radiotherapy shows that the first assumption does not hold; that is, some patients move about more than others on the treatment couch. With regard to the second assumption, simulation studies show that a conventional hyper-fractionated course of treatment, consisting of roughly 30 treatment fractions per patient, gives rise to treatment dosimetry that is significantly different from the dosimetry of a hypothetical treatment regimen consisting of a far larger number of extremely small treatment fractions. This work develops a margin recipe that takes into consideration the empirically observed breaches of these two key assumptions. Methods: We regard the standard deviation of the patient position across fractions as a random variable at the patient level, rather than as constant across patients. Analysis of inter-fraction motion data from a dataset of around 100 patients treated with radiotherapy for prostate cancer shows that this random variable is well fitted by a gamma distribution. Sensible values for the mean and variance of this gamma distribution for a typical clinical scenario are estimated from the same dataset using the method of moments. Given this information, large-scale Monte Carlo simulations of mock patient treatments are performed for patient sets in which the mean and variance of each patient's treatment error standard deviation are selected from a range of values spanning the empirical values estimated from the prostate data (a minimal sketch of this sampling scheme is given below). Treatments are simulated for many patients (thousands), and a margin is calculated for each patient using Marcel van Herk's classic margin recipe, with parameter values set such that each patient has a 90% probability of receiving the 95% isodose to the entire target volume. The proportion of patients that actually receive 95% of the dose to the entire target volume in simulation using this margin is recorded. The simulation process is repeated with the van Herk-calculated margin multiplied by a margin adjustment factor (maf), which is allowed to range in value from 0.7 to 1.6. The result is a dataset in which each observation represents a simulated treatment outcome and contains the mean and variance of the gamma distribution from which the standard deviation of the treatment error was drawn for that patient, the maf applied to the classic analytic margin, and an indicator variable specifying whether the patient did indeed receive 95% of the prescribed isodose to the entire tumour volume (in simulation). This dataset is used to develop a logistic regression model for the probability of receiving 95% of the prescribed dose as a function of the gamma distribution parameters and the maf. This model is used to determine the maf required to satisfy tumour volume dosimetry constraints over a range of gamma distribution parameter values describing the patient mean inter-fraction motion. Results: When applying a margin recipe which assumes a treatment regimen consisting of an infinite number of infinitely small treatment fractions to a conventional hyper-fractionated regimen consisting of 30 fractions, the computed margin needs to be increased by roughly 20% for realistic patient data (maf ≈ 1.2).
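As a rough illustration of the sampling scheme described in the Methods, the following Python sketch draws each patient's random-error standard deviation from a gamma distribution and tests a van Herk-style margin scaled by a maf. The parameter values are illustrative, not the fitted prostate values, and the coverage test is a crude 1D surrogate for the full dose accumulation used in the study:

import numpy as np

rng = np.random.default_rng(0)

# Method-of-moments gamma parameters for the per-patient random-error SD.
g_mean, g_var = 3.0, 1.0                   # mm, mm^2 (illustrative)
shape, scale = g_mean**2 / g_var, g_var / g_mean
Sigma = 2.0                                # systematic error SD (mm), illustrative
n_patients, n_fractions, maf = 10_000, 30, 1.2

covered = 0
for _ in range(n_patients):
    sigma_p = rng.gamma(shape, scale)              # this patient's random SD
    systematic = rng.normal(0.0, Sigma)            # per-patient systematic offset
    daily = rng.normal(systematic, sigma_p, n_fractions)
    margin = maf * (2.5 * Sigma + 0.7 * sigma_p)   # scaled van Herk-style margin
    # Surrogate for "95% of dose to the whole target": at least 95% of
    # fractions delivered within the margin.
    covered += np.mean(np.abs(daily) <= margin) >= 0.95
print(f"proportion covered at maf={maf}: {covered / n_patients:.3f}")

Repeating this over a grid of gamma parameters and maf values yields the kind of dataset to which the logistic regression model is fitted.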
When typical inter-patient variation in mean treatment error is also allowed for, the required margin increases by roughly a further 20% (total maf ≈ 1.4). Discussion: When applying a margin recipe in clinical practice, care must be taken to ensure that the assumptions under which the recipe was developed adequately describe the clinical situation in which it is to be applied. If this is not the case, then appropriate adjustment must be made to the margin. Conclusions: Various real-world breaches of standard margin recipe assumptions render margin adjustment necessary. Introduction: Head and neck cancers, specifically squamous cell carcinoma (HNSCC), often present with regions of hypoxia (low oxygenation). Hypoxia affects the ability of radiation to damage DNA and has been proven to increase cellular radioresistance and decrease tumour local control after radiotherapy (RT). Methods: A Monte Carlo computer model is in development which aims to simulate individual tumour cell propagation using epithelial cell data including cell cycle time, stem cell and differentiated cell percentages, and cellular oxygenation. Currently, tumour oxygenation parameters have been taken from Eppendorf probe results in the literature, expressed as partial oxygen pressures at individual points in head and neck tumours. An RT module is also in development, which will simulate fractionated treatment according to any desired schedule, as well as the effects of reoxygenation and accelerated repopulation. Results: Modelling results will be presented in terms of the effect of hypoxia on tumour growth rate, together with an investigation of the effects of very low oxygenation levels on cell kinetics and the corresponding tumour growth. RT simulation results currently show up to a 37% difference in the dose required for local tumour control when the levels of hypoxia, accelerated repopulation and reoxygenation are varied within an expected range (see Figure 1). Plans to experimentally analyse the oxygenation distribution over a human-equivalent radiotherapy schedule (using the Oxylite 2000 measurement system by Oxford Optronix Ltd) in FaDu tumour xenografts in immuno-deficient mice will also be discussed. Introduction: Spectroscopic x-ray detectors, such as Medipix, are opening the door to the widespread use of energy-selective biomedical x-ray imaging. With dual-energy computed tomography quickly becoming the clinical standard, spectroscopic imaging is a likely next step. However, to confirm the utility of spectroscopic x-ray detectors there needs to be a clearer indication of the clinical benefits of the technology. Methods: In order to identify possible applications of spectroscopic imaging we conducted a brief literature review of clinical applications of dual-energy systems. In addition, we analysed simulation results from our own group, our collaborating partners, and industry. This information was coupled with information regarding the clinical significance of diseases and radiology work practices. Results: Broadly, we grouped the benefits of spectroscopic x-ray imaging into three areas: (1) improved image quality, in particular a reduction in beam-hardening artefact; (2) k-edge imaging, the identification of high-Z contrast agents based on their k-edge, a technique that will lead to better use of contrast agents, e.g. (a) less contrast agent required in high-risk patients, such as those with renal impairment or diabetes and the elderly;
(b) separation of several contrast agents, allowing multiphase studies to be performed with less x-ray dose and less time on the scanner; and (3) improved soft-tissue contrast arising from differences in the mass attenuation coefficients of different tissues. The diagnosis of several diseases, such as breast cancer, is known to be improved by dual-energy systems. In addition, there is improved soft-tissue contrast of normal structures. Discussion: The review suggests there will be significant clinical benefits from spectroscopic imaging across a wide range of radiological problems. In addition, technologies like Medipix are rapidly developing, and full-body spectroscopic imaging is likely to be technically possible in the near future. However, there exists a "chicken and egg" problem in which the clinical applications cannot be developed or confirmed without a working spectroscopic system, but full-body spectroscopic systems are unlikely to be built without confirmation of clinical benefits. Conclusions: It was decided that the only way to confirm clinical applications and provide feedback to detector design teams was to build a small 3D spectroscopic system. Our scanner is based on Medipix and dubbed MARS (Medipix All Resolution System). A feasibility study to assess technical difficulties was planned and construction of the scanner began. Tracy Melzer 1,2, A. P. Butler 1,2,3,4, N. J. Cook 5, R. Watts 1, N. Anderson 2, R. Tipples 5 Introduction: This study confirms that the Medipix2 photon-counting pixelated detector enables spectroscopic biomedical x-ray imaging of real specimens. The detector provides new, exploitable information beyond the high temporal and spatial resolution of conventional x-ray equipment. 3D x-ray reconstructions (computed tomography, CT) have revolutionized medical imaging. At present, the diffraction, interference, and energy information of the incident x-ray photons is largely untapped, leaving much opportunity for advancement. Spectroscopic x-ray information is likely to be the next technological advance in medical imaging, beyond current dual-energy CT. Numerous benefits arise from the advent of spectral CT: namely, a reduction in beam-hardening artefacts, increased intrinsic tissue contrast, and a reduction in dose for contrast-based protocols. Methods: Medipix2 is a complementary metal-oxide-semiconductor detector with single-photon-counting capability. An extensive calibration process was undertaken to correct for inhomogeneities in the pixel cells due to variations in the electronics layer; the resulting threshold distribution yields a flat-field response across the pixel array. Further calibrations, including beam-hardening corrections, the derivation of a photon energy scale, and charge-sharing corrections, were implemented to optimize the chip for image acquisition. Single-threshold images, energy-window images, and dual-energy subtraction images were acquired of the tip of a chicken wing and the left hand of a 20-week-old miscarried fetus. MATLAB provided the platform for dual-energy digital subtraction of the images (a sketch of this subtraction step is given below). Results: The resulting dual-energy subtracted images are the first spectral images of biological tissue acquired with Medipix2. Both the projection and dual-energy subtracted images demonstrate clinical quality (good spatial resolution, high contrast, and low noise) over the 14 × 14 mm² field of view. Figure 1 shows a projection image of the tip of a chicken wing.
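A minimal sketch of the dual-energy log-subtraction step follows. The study used MATLAB; this Python/numpy version is an assumed equivalent, and the weighting factor is illustrative:

import numpy as np

def dual_energy_subtract(img_low, img_high, w=0.45):
    # Log-transform flat-field-normalised transmission images into line
    # integrals of attenuation, then subtract a weighted low-energy image
    # to suppress a chosen material (w is tuned empirically).
    eps = 1e-6                                  # guard against log(0)
    mu_low = -np.log(np.clip(img_low, eps, None))
    mu_high = -np.log(np.clip(img_high, eps, None))
    return mu_high - w * mu_low

# Usage: subtracted = dual_energy_subtract(low_threshold_img, high_threshold_img)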
Conclusions: This study achieved two principal objectives: Medipix2 extracted energy information from the x-ray beam, facilitating spectral images through dual-energy digital subtraction, and the detector produced clinical-quality radiological images. The methods used and the images produced form the foundation of spectroscopic x-ray imaging by demonstrating energy selectivity. We believe that spectral CT is an exciting, useful, and feasible direction in which to progress with Medipix. Our group has designed and built a small spectroscopic CT scanner (MARS) to further explore the powerful capabilities of Medipix2, as well as the forthcoming Medipix3. The scanner will not only record spatial and temporal information, but will also have the added feature of energy information and the benefits associated with spectral imaging. Implants will further advance the field: instrumented implants will become intelligent implants, and with advances in electronics and nanotechnology the goals of orthopaedic implants will be pushed even further. Functional electrical stimulation (FES) is a rehabilitative technology that applies electrical currents to the peripheral nerves. The purpose of FES is to provoke contraction of muscles deprived of nervous control, in order to obtain a useful functional movement. This distinguishes FES from purely therapeutic electrical stimulation, which is used predominantly to improve muscle strength and wound healing, and to reduce pain, spasticity, and joint contractures. The prerequisite for FES is preserved excitability of the lower motoneuron and a muscle that is able to contract. Initially (around 1970), FES was intended mainly as an orthotic device for stroke patients, but in later years therapeutic use became more important than orthotic use. The paper investigates the objective evidence of benefits derived from surface functional electrical stimulation (FES)-assisted gait for people after complete and incomplete spinal cord injury (SCI), stroke, and cerebral palsy. The skin is a recognized target for improved vaccines. Particular advantages of the skin strata include easy accessibility and dense populations of immunologically sensitive cells. By targeting vaccines to these cells in tightly defined skin locations, we can achieve radically improved vaccines: either greater immune responses, or equivalent immune responses (compared with intramuscular delivery) at significantly lower doses. To achieve this goal with mechanical approaches, one needs consistent mechanical penetration of the skin. Several mechanical approaches are under investigation today; a focus of our research group is micro-nanoprojection patches (MNPs: micron-scale needles with nano-scale tips), with thousands of projections dry-coated with vaccine material, that breach the skin's outer layer and target the underlying immunologically sensitive cells. For MNPs to achieve optimized mechanical penetration into skin, the interactions of this process must be understood. This is particularly interesting given that the skin is a complex multi-layered bio-viscoelastic material. In this paper, we report on a study investigating key parameters of the mechanical penetration of MNPs into skin, namely MNP length, velocity, momentum and energy. To assess the delivery of vaccine antigens to skin with MNPs, we coated the devices with a surrogate generic vaccine fluorescent probe (Vybrant® DiD; Molecular Probes, Eugene, Oregon), mixed in a coating solution of methylcellulose and water.
This liquid vaccine mixture was applied and dried onto two prototype MNPs, each with several thousand projections/cm² and one of two lengths of approximately 100 μm. These projections are constructed by DRIE techniques. Application of the array was performed using a device consisting of a spring-driven plunger to which the array was attached. This device had interchangeable plungers and variable impact velocity, allowing velocity, energy and momentum to be studied independently to quantify the key parameters for skin penetration. Two projection lengths were employed so that their performance could be compared. The MNPs were inserted into the skin of C57BL/6 mice, following which the skin was excised, fixed, cryopreserved, sectioned and examined for fluorescent dye delivery using a confocal laser scanning microscope (Zeiss LSM510 Meta). As one example of the results we will present, measured penetration is shown in Fig. 1, highlighting the delivery location of antigen as a function of velocity and projection length. Projection length was found not to translate into significantly increased penetration of MNPs into the skin. In contrast, increasing velocity caused a significant increase in penetration. Interestingly, there appears to be a significant proportion of MNPs penetrating to the basal layer of the epidermis, which is the epidermis-dermis junction. Further results covering the full velocity, momentum and energy examinations will be presented in this paper. Discussion: Increasing velocity appears to be beneficial for penetration, which suggests that there is a strain-rate dependency in the skin; this is in accordance with previously reported skin biomechanical studies. Energy and momentum comparisons will highlight their relative importance to skin penetration in this paper. The peak where the epidermis meets the dermis suggests that there is an effective skin strength discontinuity, which will be discussed further with both an experimental and a theoretical analysis. We believe this is the first time this effect has been found in the field of microneedle drug/vaccine delivery to skin. A full analysis of the biomechanics of this interaction will be presented. Conclusions: MNP array vaccine delivery to skin is a novel approach to exploiting the skin immune system for better vaccines. The approach relies on mechanically penetrating the skin, a mechanically complex material. In an experimental investigation, we have shown that mechanical penetration is achieved, and gained a new understanding of the relative effects of projection size, MNP application strain-rate and skin mechanical properties on the resultant skin penetration. Introduction: Multi-modality imaging of the breast is becoming increasingly common, and yet combining the information from the different images is difficult due to differences in the basic imaging physics, geometry, and loading conditions under which the breast is imaged. A key step in moving between these images will be the establishment of an individual-specific biomechanical model of the breast that provides the mapping between the different imaging configurations of the breast. We are creating such biomechanical models and are developing a software platform to incorporate the models into the clinical workflow for breast cancer diagnosis.
This paper presents a study specific to predicting breast configurations during MR imaging and ultrasound, where gravity loading significantly changes breast shape; for instance, the breast shape changes from when a patient lies prone (on her stomach) for an MR scan to when the patient lies supine (on her back) for an ultrasound. Methods: MR images of the breasts of a volunteer in prone and supine gravity-loaded configurations were acquired using a 1.5 T MR scanner. The prone images were used to create a biomechanical model of the volunteer's breast. The geometry was modelled using hexahedral volume elements with cubic-Hermite shape functions that provide smooth, realistic representations of breast shape. The breast was assumed to be isotropic and homogeneous, and its mechanical behaviour was modelled using a neo-Hookean stress-strain relationship, W = c1(I1 − 3), where I1 is a measure of strain (the first principal strain invariant) and c1 is a measure of breast tissue stiffness that must be determined for each individual. Using a material parameter estimation technique, we identified c1 for this individual as 0.08 kPa. A finite element implementation of large-deformation elasticity was used to predict breast shape in the supine gravity-loaded configuration. The breast was assumed to be fixed to the chest wall, and gravity loading was applied as a body force. The predicted shape was compared to the MR images of the supine configuration by projecting segmented data of the skin surface from the supine images onto the predicted breast surface configuration. Results: The biomechanical model predicted breast shape in the supine configuration with an RMS error of 8.4 mm. An MR image slice of the prone configuration was embedded into the model of the volunteer's breast and warped to the supine gravity-loaded configuration using the mapping provided by the finite elasticity deformation predictions (Fig. 1). Discussion: Fig. 1 shows that the model captures the gross characteristics of breast shape in the supine configuration. These models are now being incorporated into a software platform. Fig. 2 shows how a sphere in a prone configuration of the breast (gold lines) changes to a squashed oval (green) in the supine configuration. The ability to warp a medical image from one configuration to another enables clinicians to track tissue shape and location when a patient is oriented in different positions, via transformations calculated using the laws of physics. Conclusions: A software platform and biomechanical modelling framework are under development to enable clinicians to track the location and shape of suspicious breast lesions across different breast configurations, imaged using different modalities such as MRI and ultrasound. The biomechanical model predicts gross shape characteristics, and such mapping between image views and orientations will assist clinicians in breast cancer diagnosis. Clarice Field Keywords: Fixed Partial Denture, remodelling, finite element, mechanical stimulus Achieving a greater understanding of the functional/biological response of bone within the mandible to mastication is an integral component of dental biomechanics and thereby a basis for the design of dental prostheses. This investigation analyses the potential biological response to the stress loading of the mandible pre- and post-installation of a Fixed Partial Denture during mastication.
This study develops a three-dimensional finite element analysis (FEA) model for the determination of the stresses generated upon loading teeth in the premolar to molar region of the mandible. The alterations in stress/strain fields between the pre-bridgework and Fixed Partial Denture cases are investigated to predict how the bone would remodel under mastication load. Mandibular bone undergoing mastication loads with an FPD experiences greater stress/strain magnitudes than prior to bridgework. The bone supporting the abutment teeth undergoes higher stresses/strains than in its natural state, which indicates bone remodelling in terms of Frost's mechanostat analysis. The primary function of the lung is gas exchange. Through reciprocating ventilation and diffusion of ambient air, the lung adds oxygen to, and removes carbon dioxide from, the venous blood. Efficient gas exchange relies on the entire cardiopulmonary system, including the conducting airways, acinar airways, pulmonary perfusion, respiratory muscles and neurological drive. The purpose of lung function testing is to determine the efficiency of gas exchange, and to identify any impairment of this process by disease or injury. There are many different tests of lung function that measure either single or complementary components of the gas exchange process. Simplified, lung function can be broken down into the efficiency of the reciprocating bellows (as measured by spirometry, lung volumes, etc.) and the efficiency of diffusion (as measured by DLCO, arterial blood gases, etc.) [Figure 1]. Patients whose values fall below the lower limit of normal are likely to have lung disease. Respiratory function tests categorise lung disease into either an obstructive (emphysema, bronchiectasis, chronic bronchitis, asthma, etc.), restrictive (diffuse parenchymal lung disease, chest wall disorders, muscle weakness) or mixed (combination of obstructive and restrictive) pattern. Lung function results are used in conjunction with patient history, physical assessment, imaging and histological assessment to assist in the diagnosis and management of lung disease. Objectives: 1. To review imaging techniques routinely used to image the respiratory system, in particular chest radiography and computed tomography (CT). 2. To review the capabilities, advantages and disadvantages of commonly used techniques with respect to demonstrating normal anatomy and diseases of the respiratory system; the emphasis will be on multidetector-row CT. 3. To gain an understanding of the limitations of diagnostic imaging if interpreted in isolation from clinical information. 4. To provide an overview of advanced/experimental techniques for imaging the respiratory system. Michael Hlavac 1 Introduction: The respiratory system serves to maintain gas exchange, which is an essential requirement for normal human physiological functioning. Disorders of the respiratory system have the potential to impair normal gas exchange, and thus have major implications for health. Methods: The aim of this talk is to provide a general overview of common respiratory disorders. This will include disorders of the airways (asthma, COPD), the lung parenchyma (infectious and interstitial lung disorders) and the pulmonary vasculature. Aetiology, presentation and treatment will be discussed. This talk will be given in association with a review of the radiology of the respiratory system, to allow clinical correlation with radiological images.
Conclusions: It is envisaged that this talk will provide a broad overview of the scope of respiratory disorders commonly encountered in primary and secondary care. At IBA, we Protect, Enhance and Save Lives. This is pursued through various future developments for the IBA Proton Therapy System, including: (1) the 6-Degree Robotic Patient Positioner; (2) the EmPath Gantry design; (3) the Dual Incline Beam Line Treatment Room; (4) the Carbon Accelerating Superconducting Cyclotron; (5) the Superconducting Isocentric Gantry; and (6) others. This presentation introduces and explains the details of IBA's future developments in its Proton Therapy System. Abstract: The use of protons for radiation therapy offers theoretical advantages compared with external beam photon radiotherapy: proton therapy lowers the integral dose to the patient owing to the finite range of protons. The basis for the reduction of integral dose is the proton Bragg peak, which increases the dose deposited in the tumour while reducing the dose to normal tissue on the distal side of the target volume. Protons also have a demonstrated advantage for treating small tumour volumes at shallow depths, such as tumours of the eye, and CNS tumours such as chordomas and chondrosarcomas. Proton radiotherapy reduces the volume of normal tissue exposed to low doses, which is clinically significant with respect to the risk of second malignancies. That risk is notably more pronounced for younger patients than for older ones, as younger patients have more remaining lifetime in which radiation-induced cancers can develop. However, proton therapy is less tolerant than photon therapy of uncertainties in both treatment planning and treatment delivery. For example, tissue inhomogeneity has a greater effect on proton dose distributions than on photon dose distributions. In planning proton therapy, the density of tissue along the proton path must be precisely determined and accounted for in order to obtain the proton energy distribution required to achieve the planned dose distribution in the patient. Failure to allow for a zone of higher density could result in a near-zero dose in a distal segment of the target volume due to the reduced range of the protons. In contrast, for photons, because of their different energy-loss processes, an increased density would cause only a modest lowering of the dose distal to the higher-density region. Conversely, neglecting to account for an air cavity upstream of the target volume would, for proton beams, result in a high dose being deposited in distal normal structures, whereas only a modestly increased dose would be deposited in the case of photon beams. Furthermore, motion and mis-registration of the target volume with respect to the radiation beams have far more severe consequences in proton therapy than in photon therapy. If the target volumes are to be adequately irradiated, and adjacent OARs protected, in proton therapy, it is essential that the causes and possible magnitudes of motion and mis-registration are understood; that their possible consequences are understood; and that measures are taken to minimize motion and mis-registration to the extent possible and clinically warranted. It is almost impossible to eliminate all uncertainties in radiation therapy. Therefore, it is important to understand the sources of these uncertainties, quantify their magnitudes, and develop mitigation and/or minimization strategies.
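The density-range effect described above can be made concrete with a small water-equivalent path length (WEPL) calculation; the following Python sketch uses illustrative numbers, not data from the presentation:

import numpy as np

def wepl(thicknesses_cm, relative_stopping_powers):
    # Water-equivalent path length: physical thickness of each slab
    # weighted by its stopping power relative to water.
    return float(np.dot(thicknesses_cm, relative_stopping_powers))

# A 10 cm thick region planned as water-like (RSP 1.00) that is actually
# slightly denser (RSP 1.05) consumes extra proton range:
planned = wepl([10.0], [1.00])      # 10.0 cm water-equivalent
actual = wepl([10.0], [1.05])       # 10.5 cm water-equivalent
print(f"distal edge pulled back by {actual - planned:.1f} cm")  # 0.5 cm

For photons, the same 5% density error would change the transmitted dose only modestly, which is why the planning tolerances differ so sharply between the two modalities.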
In proton therapy these uncertainties arise from several sources, including dose calculation approximations, biological considerations, setup and anatomical variations, and internal movements of low- and high-density organs into the beam path. Organ motion also has a major impact on the proton range, which is managed by adding a distal safety margin. These margins reduce the benefit of proton therapy in treatment sites where the physical properties of protons could make a significant difference, such as lung cancer. The focus of this presentation is to: (a) understand the potential sources of dosimetric uncertainties in proton therapy; (b) evaluate the impact of these uncertainties on the accuracy and conformity of the dose delivered to patients; and (c) suggest potential strategies that translate the physical advantages of proton therapy into a maximized dosimetric benefit for the patient. Introduction: Proton therapy is an important innovation in external beam radiation therapy. Through the benefits of the Bragg peak, a highly conformal dose may be delivered across a tumour volume. Such a dose distribution can minimise the occurrence of normal tissue complications whilst maximising effective tumour control. For quality assurance (QA) purposes it is important to be able to accurately characterise a proton therapy beam; a measure of the absorbed dose along the beam does not provide sufficient biological information. An alternative method of characterisation is the microdosimetric approach, which infers the radiobiological properties of the beam by measuring the energy deposited by the beam in a micron-sized volume. This allows the changing relative biological effectiveness (RBE) of the beam along the Bragg peak to be considered. Traditionally, gas proportional counters have been used as the detector of choice in microdosimetry. These detectors have the advantage of excellent tissue equivalency of the gas, but also suffer from some well-documented shortcomings. In particular, they are unsuitable for proton therapy QA purposes, as the large physical size of the sensitive volume (SV) limits the spatial resolution achievable in measurements. Silicon detectors possess a truly microscopic SV and thus exhibit far superior spatial resolution. They also address several of the other problems associated with gas detectors. Recently, a silicon detector with a planar SV was developed at the University of Wollongong's Centre for Medical Radiation Physics (CMRP). This device has been successfully tested and used to obtain biological data at various proton therapy institutes, including Loma Linda University Medical Centre (LLUMC), California, and Massachusetts General Hospital (MGH), Boston. Novel in-field and out-of-field measurements have been performed. These have led to a clearer understanding of a proton beam's RBE at various points within a patient. Studies have identified that the performance of the device may be improved by modifying the geometry of the SV from planar to cylindrical. Through the CMRP's collaboration with the University of New South Wales (UNSW) and the Australian Nuclear Science and Technology Organisation (ANSTO), such a cylindrical device has been fabricated and is currently being tested. Methods: The charge collection characteristics of the new cylindrical silicon detector structure were experimentally determined via an ion beam induced charge (IBIC) study.
This was performed using the ANSTO heavy-ion microprobe. The amount of energy deposited within the microdosimeter for each ion traversal, ΔE, was measured with a standard charge-sensitive preamplifier, shaping amplifier and multichannel analyser (MCA), in coincidence with digitized voltage signals of the beam position, x and y, for each event. Data triplets (x, y, ΔE) were saved for each event in a list-mode file. Analysis software used these files to generate IBIC imaging maps displaying either a spatially resolved image of the median amount of charge collected, or the spatially resolved frequency of events within a particular range of deposited energies of interest, as a function of beam position. Results: The results reveal that the new detector structures possess a well-defined cylindrical SV (see Fig. 1). An array of these structures successfully provides a greater effective surface area without any degradation of the measured spectrum. Discussion: Given that the second-generation detector structures possess a well-defined cylindrical SV, they have the potential to improve upon the performance of the previous silicon microdosimeter design. Conclusions: Silicon microdosimeters have the potential to play an important role in proton therapy QA. The CMRP has successfully tested a silicon microdosimeter at various international proton therapy institutes. Recently, a second-generation silicon microdosimeter has been developed. Charge collection imaging reveals that the new device possesses a well-defined cylindrical SV, as required for improved performance. Introduction: Proton computed tomography (pCT) is a novel imaging modality that has been suggested as a means of maximizing the potential benefits of proton radiation therapy. Currently, proton therapy treatment plans are produced using data acquired from x-ray CT scans. However, there is a well-documented uncertainty in converting CT numbers to electron densities, which are the values required by the treatment planning software to predict how the dose will be deposited within the patient. Proton CT employs proton-by-proton energy loss measurements to directly obtain a 3D map of relative electron densities, minimizing the uncertainty in Bragg peak location at treatment time. In order to achieve the desired spatial resolution in the reconstructed image, reconstruction algorithms must employ a path approximation formalism that takes into account multiple Coulomb scattering of the protons within the patient. Therefore, algorithms that can handle non-linear paths are required. Algebraic techniques that involve cyclic processing of many linear equations are one such example. Although these algorithms result in images of superior quality to those reconstructed with transform methods, such as filtered backprojection, far more computing time is required. Therefore, a time-efficient method of reconstructing the pCT projection data must be developed. This presentation outlines the process of image formation in pCT, from measurement, to path calculation, to specific image reconstruction algorithms aimed at reducing the time required to reconstruct pCT images. Methods: The Monte Carlo simulation toolkit GEANT4 is used to simulate a prototype pCT system. Using these data, images are reconstructed using various algebraic image reconstruction algorithms on a quad-core machine (a minimal sketch of the fully sequential update is given below).
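For orientation, a minimal sketch of the fully sequential ART (Kaczmarz) update that the block-iterative algorithms are benchmarked against follows; this is an assumed textbook form, not the study's implementation, and A and b are illustrative stand-ins for the proton system matrix and measured water-equivalent path lengths:

import numpy as np

def art(A, b, n_cycles=10, relax=0.5):
    # A: (n_protons, n_voxels) path-length matrix along each (curved) proton
    #    path; b: measured water-equivalent path length per proton.
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)        # ||a_i||^2 for each row
    for _ in range(n_cycles):
        for i in range(A.shape[0]):                # one proton history at a time
            if row_norms[i] > 0:
                x += relax * ((b[i] - A[i] @ x) / row_norms[i]) * A[i]
    return x

Block-iterative variants such as SART and DROP apply essentially the same correction averaged over a block of rows at once, which is what allows the per-block work to be spread across multiple processors.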
Calculating the relative error between the reconstructed images and the known phantom composition allows a comparison of image quality. The amount of time saved by employing block-iterative (BI) reconstruction algorithms (SART, BICAV and DROP), as opposed to a fully sequential algorithm (ART), is also demonstrated. Results: Figure 1 illustrates the relative error for each algorithm as a function of cycle number. Discussion: When the data are divided into one projection angle per block (180 blocks), all the BI algorithms investigated were found to converge faster than the fully sequential ART. On the quad-core machine used in this study, the use of block-iterative algorithms was found to speed up the reconstruction process by a factor of 3. Further acceleration is possible with a greater number of processors, or with graphics processing units. Conclusions: Although image reconstruction in pCT may be a time-consuming task on a single processing unit, BI image reconstruction algorithms allow the workload to be divided over multiple processors, thus reducing the time required to reconstruct images. The results from this study indicate that, of the BI algorithms tested, DROP provides the best quality images when applied to pCT projection data. Introduction: Following the implementation of a quality system, the ARPANSA radiotherapy calibration service was accredited to ISO 17025 [1] by the National Association of Testing Authorities (NATA) in 2007. In addition, ARPANSA calibration reports are now recognized under the Mutual Recognition Arrangement (MRA), whereby the results will be accepted in other countries (subject to the terms of the arrangement, most notably that the level of agreement between countries is listed in the BIPM Key Comparison Database [2]). We discuss the quality manual and procedures which underlie NATA accreditation and MRA recognition, and we present historical data which are used to check the quality of calibration results. Results: The history of chamber calibration coefficients for a particular chamber type, and the ratio of the absorbed-dose-to-water and air-kerma calibration coefficients, are used to check that new calibrations fall in the expected range. For example, for NE2571 chambers, N_D,w should be in the range 45.55 ± 0.36 mGy/nC, N_K in the range 41.55 ± 0.34 mGy/nC, and N_D,w/N_K in the range 1.0962 ± 0.0016, where each range is twice the standard deviation of the historical values for 16 different chambers. The data from which these numbers are derived are presented. We also present results from site visits where ARPANSA staff have visited radiotherapy centres and collaborated in comparisons of accelerator output. The results of these comparisons indicate consistency between the photon doses measured by 16 different centres at the level of ±1% (Fig. 2). This result confirms the consistency of the calibration coefficients supplied to these centres. Site visits also provide the opportunity for feedback regarding the improvement of the calibration service, as required by ISO 17025. The National Radiation Laboratory (NRL) carries out measurements of linear accelerator outputs under standard reference conditions. This Level I dosimetry audit is used to verify the accuracy of dose calibrations. The paper presented describes a collaborative project to develop an additional dosimetry audit to verify the accuracy of the delivery of radiotherapy doses to patients at prescription points in an anthropomorphic phantom.
This Level III dosimetry audit is intended to test the full treatment chain, including treatment planning, thereby providing verification of the overall accuracy of delivery of prescribed radiotherapy doses. The NRL, acting as New Zealand's regulatory authority, will utilise the method developed to conduct biennial audits at all radiotherapy centres. Methods: A CIRS anthropomorphic chest phantom (see Figure 1) and a MOSFET dosimetry system were purchased for the project. The MOSFETs' reproducibility, energy dependence and angular dependence were assessed. Following the commissioning of the MOSFETs, the dosimetry system was trialled at Christchurch Hospital's radiation oncology department. Measurements were carried out in the phantom to test the MOSFET response in both lung and soft tissue. With clinical input, a simple treatment plan and a more complex plan (involving lung tissue and wedges) were devised. These plans were designed to test the entire treatment planning and dose delivery process. An audit protocol was written and tested at two radiotherapy centres. Results: Commissioning measurements for the MOSFETs revealed that the calibration factors were energy dependent, and each MOSFET required individual calibration. Reproducibility was found to have an average standard deviation of 2% on standard sensitivity and 1.2% on high sensitivity in the dose range of 1–2 Gy. When operated at 90 degrees to the beam axis, a drop in response of 6% for Co-60 and 3% for 6 MV photons was observed relative to 0 degrees, indicating that the angular dependence of the MOSFET was also a function of energy. Based on this commissioning work, an uncertainty of ±3.4% at the 95% level of confidence has been estimated for measurements of dose in the region of 2 Gy. The initial treatment protocol trials produced agreement between the planned doses and the measured doses to within the calculated uncertainties for 2 of the 4 treatments delivered. Discussion: The performance of the MOSFETs was found to be not entirely consistent with the manufacturer's claims, and careful characterisation of the MOSFETs was required in order to determine accurate doses from their raw readings. Despite such characterisation, there was disagreement between the calculated doses and measured doses in 2 of the 4 trials. Teething problems were encountered in the audit protocol which may partially account for this, as well as the fact that the MOSFETs used in this trial were relatively old and their performance was probably not optimal. Lessons learned from these trials have been incorporated into an updated treatment protocol, and for future work new MOSFETs will be purchased to improve the system performance. Conclusions: Overall, the commissioning data for the MOSFET dosimetry system (with associated calculated uncertainties), and the trials of the CIRS chest phantom with the audit protocol, have shown that the procedure and equipment are suitable for audits of the complete dosimetry chain in external beam radiotherapy dose delivery. Introduction: ARPANSA offers a mailed thermoluminescence dosimetry (TLD) audit service for megavoltage (MV) photon beams to all radiotherapy providers in Australia and New Zealand. The audit is designed to be included as part of a quality assurance program. It is based on the IAEA/WHO audit service [1], and its purpose is similar in nature to audit services offered by other countries, such as that provided by the Radiological Physics Centre [2].
The audit is also beneficial to ARPANSA in that it tests the validity of the calibration factors supplied to the radiotherapy provider, if their reference chambers are calibrated by ARPANSA. The participating centre is sent capsules containing LiF-100 powder and an irradiation jig for simple experimental setup. Two measurements are possible: (a) reference beam output, in which the capsule is placed at a depth of 10 cm (or 5 cm) in a water phantom and irradiated with a 10 cm × 10 cm field to a dose as close as possible to 2 Gy; and (b) beam quality, in which two capsules are irradiated, one at a depth of 10 cm and the other at a depth of 20 cm, providing a D20,10 or TPR20,10 value. The dose to the capsules is determined by comparing their response with control capsules irradiated at ARPANSA with a known dose, and applying linearity, energy, fading and holder correction factors. ARPANSA-measured reference beam outputs and beam qualities are reported to the participant as the ratio of clinic-stated to ARPANSA-measured values. The results to date are shown below and represent data from five separate clinics. All beam energies are plotted on the graphs and range from 6 MV to 18 MV, with the most common energy audited being 6 MV. The uncertainties quoted are 4.2% and 3.8% for the reference beam output and beam quality measurements, respectively; these are also presented on the graphs. The average of all the data points is represented by the solid red line. The average difference between clinic-stated and ARPANSA-measured reference beam output and beam quality is 0.3% and 1.2%, respectively. The small number of beam quality measurements made so far may contribute to the larger deviation from unity of the average of these measurements compared with the reference beam output measurements. One reference beam output measurement and one beam quality measurement have fallen outside the uncertainty limits, as shown on the graphs. Both of these results have been repeated, with the repeat measurements falling within the uncertainties quoted. The initial discrepancies could be due to any number of setup, readout or random errors. The ARPANSA mailed MV photon TLD audit program provides a simple quality control measurement for radiotherapy providers. It also verifies the calibration factors supplied by ARPANSA to the provider. All reference beam outputs and beam qualities measured have fallen within quoted uncertainties thus far, with a small number requiring follow-up measurements to ensure compliance. 1. http://www-naweb.iaea.org/nahu/dmrp/tld.asp 2. http://rpc.mdanderson.org/rpc/ The 80 kV beam quality at WBCC falls in the overlap region of the low- and medium-energy kilovoltage dosimetry protocols, where either protocol can be implemented. Therefore, to simplify dosimetry measurements at WBCC, the 80 kV beam was re-classified as medium energy so that the routine dosimetry setup of both the 80 and 100 kV beams would be the same. This project highlighted some key discrepancies between the low- and medium-energy protocols of TRS 398. Similar inconsistencies exist between the low- and medium-energy protocols of TRS 277 as well. These internal inconsistencies of a given code of practice will be discussed. Methods: Measurements were made in air and in water for the 80 kV beam (2.23 mm Al HVL) as per the low- and medium-energy protocols of TRS 277, and on the surface of a Perspex phantom and at 2 cm depth in water using the low- and medium-energy protocols of TRS 398, respectively. These measurements were compared within and between each code of practice.
The NRL was invited to investigate these findings; they confirmed our measurements for the low-energy protocols of both TRS 277 and TRS 398. The NRL also highlighted some issues in the code of practice from the standards laboratory perspective. Results: Significant discrepancies were found within and between both codes of practice for the low- and medium-energy protocols. Discussion: The origin of the discrepancies encountered in the changeover from TRS 277 to TRS 398 appears to be principally related to the chamber perturbation factors and the determination of percentage depth doses. The specific issues with these factors will be presented. In light of the above findings, it was decided not to re-classify the 80 kV beam but to continue to measure it as per the low-energy protocol of IAEA TRS 398, on the surface of a Perspex phantom. This decision was made on the recommendation of the NRL. Introduction: Image-guided brachytherapy offers the ability to accurately deliver high doses of radiation in a small number of treatment fractions. However, with the rapid fall-off in dose, small geometric uncertainties in source position can lead to significant errors in dose delivery. Furthermore, in the absence of a record and verification system, and given the pressure often present to plan and treat the patient in a very short period of time, the potential for error is large. International studies [1,2] have demonstrated that an audit process can identify geometric and dosimetric errors that may significantly impact on clinical outcomes. Despite these risks and the international evidence, a brachytherapy audit has not been conducted in Australasia. To address this issue, local hospital, university and industry funding was sourced to run a pilot study with the intention of designing a Level III (anthropomorphic) dosimetry audit that will be open to all Australian brachytherapy sites. In this paper we describe the design of the pilot study, the challenges that we have encountered, and the results from the TLD measurements and centre survey. Methods: Seven sites from 4 states were invited to participate in the pilot study. The cylindrical phantom, which is similar in design to that used by the ESTRO group [2], was filled with water and imaged using each centre's standard imaging protocol for a multi-catheter implant (i.e. orthogonal radiographs or CT). A treatment plan was generated to deliver 1 Gy to the central TLD (1 of 3) from 6 dwell positions. The phantom provides three channels, which contained the catheters provided by the local facility. Each catheter was used to deliver the dose to the TLDs. The TLDs were calibrated using an in-air calibration jig. The calibration factor for the ion chamber used to determine the air kerma rate of the source (used to calibrate the TLDs) was derived from the National Physical Laboratory (UK) primary standard and an interpolated value from the Australian laboratory (ARPANSA). The purpose of the pilot study was to verify the feasibility of a brachytherapy audit, define the methodology and quantify the uncertainties in the measurement process. All centres that have so far participated in the pilot study were able to image, plan and "treat" the phantom as instructed within a 2-hour period, with the exception of one centre that uses a PDR source. All TLD results from the phantom measurements were within 5% of one another, with the exception of one subset of TLD readings in which an error in programming the dwell positions was detected.
TLDs provide a convenient method of measuring dose, and their small size is appropriate in a high dose gradient. However, initial calibration of the TLDs revealed a systematic 8% difference between TLDs irradiated in the air jig and in the water phantom, and tests are currently underway to confirm the TLD energy response as the cause of the discrepancy. Most centres use a well chamber to determine or verify the air kerma rate of their Ir-192 source, and small differences in calibration standards add to the uncertainties in the TLD readings. Ion chamber calibration factors determined using 2 methods from 2 standards laboratories were in close agreement (<0.5% difference). International studies have demonstrated that geometric and dosimetric audits can identify errors that can impact on clinical outcomes. Australasia has not yet had the opportunity to participate in a brachytherapy audit, and until this happens we are unable to verify that all centres that have introduced complex treatment techniques are able to provide high quality, accurate brachytherapy treatments. The pilot study described in this paper suggests that an audit is feasible and capable of detecting errors in treatment delivery. Introduction: Where two fields are junctioned, the jaw calibration becomes far more important than when there is no junction. A difference of 2 mm can lead to field non-uniformity of up to 35% along the junction. Field junction QA should therefore be carried out when junctions are used regularly, to ensure that there is no overdose or underdose along the junction. Junction QA is often performed qualitatively, as quantitative QA often requires the use of film dosimetry, which is time-consuming and requires specific software. The technique introduced here uses a regular QA device which is already set up as part of fortnightly QA. Results are quickly achieved and can be compared quantitatively in a period of minutes, without the time-consuming use of film dosimetry. Methods: A commercially available 1D diode array (Sun Nuclear Profiler) was evaluated as a QA tool for determining the jaw mismatch along a junction. By comparing the diodes on either side of the central axis (CAX), the relationship between junction mismatch and diode reading was evaluated. Even though the diodes are placed 2.5 mm either side of the CAX, reasonable estimates of the junction mismatch can be achieved by this method, as penumbral influences are pronounced within a penumbra width of the junction and become more apparent with mismatch. The resulting relationship between non-uniformity and jaw shift is therefore expected to be approximately linear for small shifts, becoming non-linear for large shifts. By using the diodes on either side of the CAX, an accurate measurement of jaw offset can be calculated. This concept can be extended to IMRT film QA, where the jaw calibration is checked by moving the junction through the IMRT range of fields for the MLC leaves, combined with a visual film check. Results: There is a linear relationship between the average value of the diodes next to the CAX and the offset in leaf position (mm). Quantitative estimates of the leaf offset can be made with an accuracy of 0.1 mm (a minimal sketch of this linear calibration is given below). Combined with film, this technique can be used for MLC leaf banks to give a quantitative estimate of the leaf calibration required for IMRT QA on a central leaf; the rest of the leaf bank can then be checked qualitatively.
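A minimal sketch of the linear calibration implied by this result (readings and offsets are illustrative, not the measured data): known jaw offsets are introduced, the near-CAX diode average is recorded, and the straight-line fit is inverted to estimate an unknown offset.

import numpy as np

known_offsets = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # deliberate shifts (mm)
diode_avg = np.array([0.71, 0.85, 1.00, 1.16, 1.30])    # mean of the two near-CAX diodes

slope, intercept = np.polyfit(known_offsets, diode_avg, 1)  # linear calibration

def estimate_offset(measured_avg):
    # Invert the fitted line to convert a diode reading into a jaw offset (mm).
    return (measured_avg - intercept) / slope

print(f"estimated offset: {estimate_offset(1.08):+.2f} mm")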
Discussion: Although this technique is very good for jaw calibration and junction QA checks on a single plane, it has not been extended to oblique junctions or junctions between different modalities. There is a field size limitation for junctions off the CAX, and the 1D array is unable to check all MLC leaves. Conclusions: Field junction QA is achievable and accurate when measured with a 1D diode array. This QA can be extended with the use of film to include IMRT QA, but is not suitable for oblique incidence or multi-modal junctions. Introduction: Medipix is a hybrid x-ray detector designed to provide energy-selective images at high spatial and temporal resolutions. This spectroscopic capability gives control over the received x-ray spectrum to be analysed, and additional information on material composition through variations in mass attenuation. A brief feasibility study was undertaken, which used plain radiography to successfully demonstrate the practical application of this technology to biomedical imaging. The subtle variations in absorption that Medipix can detect are most useful if overlying tissues do not dilute the effect; it was therefore decided to construct a CT scanner to further demonstrate the clinical utility of the detector. Methods: A desktop CT scanner was designed that would enable us to image small (up to 80 mm diameter and 200 mm length) animals and biological samples, to demonstrate the novel spectral information that Medipix can deliver. The scanner was designed to take full advantage of the high spatial resolution that Medipix offers (55 μm pixels) and to access the most energy information possible within the limits set by the detector material and low-energy charge-sharing effects. This was a prototype scanner, built to prove both the detector's abilities and the scanner design; it had to be safe, robust, architecturally flexible, transportable and affordable. The scanner was successfully built and our first biological sample gave promising results; image quality was sufficient to show fine detail (43 μm voxels) at the base of the mouse's skull, and the three reconstructed energy bands gave noticeable variations in relative absorption through different tissues. The scanner performed well, in that it gave acceptable image quality and demonstrated that spectral CT yields additional information through variations in mass attenuation. There are numerous changes we can make to the scanner design and operation that will improve image quality. The prototype scanner fulfilled all the goals we set at the start of the project and is flexible enough to allow the many changes that will contribute to improved image quality. When combined with additional improvements through image processing and specimen preparation, the scanner should provide a very useful tool for further investigations into spectral CT.
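The energy-banded reconstruction described above can be pictured as a conventional CT reconstruction run once per threshold bin. A minimal sketch follows, using a synthetic phantom and a crude scale factor to stand in for energy-dependent attenuation; the bin names, scale factors and phantom are assumptions for illustration, not the scanner's actual processing chain.

```python
import numpy as np
from skimage.transform import radon, iradon

# Illustrative only: a synthetic phantom "imaged" in three energy bins.
# In a Medipix-based scanner each bin comes from a different threshold
# setting; here energy dependence is faked with a simple scale factor.
phantom = np.zeros((128, 128))
phantom[40:60, 40:60] = 1.0   # "soft tissue" block
phantom[70:90, 70:90] = 3.0   # "contrast agent" block

angles = np.linspace(0.0, 180.0, 180, endpoint=False)
energy_scale = {"low": 1.0, "mid": 0.7, "high": 0.4}  # assumed fall-off

recon = {}
for band, scale in energy_scale.items():
    sinogram = radon(phantom * scale, theta=angles)
    recon[band] = iradon(sinogram, theta=angles, filter_name="ramp")

# Materials whose attenuation falls differently with energy (for example
# across a contrast agent's k-edge) separate when per-bin reconstructions
# are compared voxel-wise.
ratio = recon["high"] / np.clip(recon["low"], 1e-6, None)
```

The per-voxel ratio (or any per-bin comparison) is what distinguishes two materials that would be indistinguishable in a single broad-spectrum reconstruction.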
Introduction: The Medipix All Resolution System (MARS) is a 3D spectroscopic imaging CT scanner based on the Medipix photon-processing x-ray detector. The Medipix hybrid silicon pixel detectors, which were developed at CERN, are capable of high spatial and temporal resolution. Images can be obtained at different energy thresholds, offering the potential for spectroscopic x-ray analysis. The present desktop MARS scanner (Figure 1) was constructed through a collaboration of researchers from the University of Canterbury and the Canterbury District Health Board. Mechanical calibration procedures and image quality tests will be presented, and the operation and status of the scanner will be discussed. The MARS scanner combines a broad-spectrum microfocus x-ray tube and the energy-selective x-ray detector Medipix-2. The detector and the x-ray tube are held in a rigid geometry and are capable of being rotated through 360°. Samples, including pathology specimens and small animals, of up to 90 mm diameter can be positioned in the centre of rotation. A MATLAB interface is used to control the stepper motors positioning the scanner and to read and record imaging data. A modular approach is used in the design of the software and the hardware, both of which are being continually refined. A calibration phantom has been constructed to verify the alignment of the central beam axis, the sensor translation axis, the axis of rotation and the sample movement during scans. Among other tests, measurements of the beam profile of the x-ray tube and assessments of image quality have been performed. The MARS scanner has undergone a number of design improvements. The current version of the scanner has been in operation since early 2008, imaging phantoms, pathology specimens and mice. Measurements of the physical alignment during the scanning process were performed. Experience has been gained in the operation of the system, prompting a number of design improvements. The experience in the use of the MARS scanner and the results of the alignment measurements have been used to improve the design, operational processes and image quality. In diabetes or myocardial infarction, heart cells adapt to physiological and loading changes in the cardiac muscle that arise from hemodynamic and geometric changes or pathological processes. This leads to regional thickening or thinning of the ventricular wall, and enhancement or degradation in regional muscle function. Constitutive properties of the myocardium are important for modelling regional variation of the mechanical function of the heart muscle, and can also be a useful clinical indicator of disease. Magnetic resonance imaging (MRI) can be used for the investigation of heart motion and cardiac disease effects, due to its ability to non-invasively quantify 3D changes in the geometry and function of the heart. Diffusion Tensor MRI (DTMRI) measures the preferred orientations of the local self-diffusion of water molecules in biological tissues, and has been shown to correlate with myofibre direction [2]. Fibre orientation is an important determinant of myocardial wall stress and exhibits large regional and transmural variation; however, it cannot be directly measured [3]. In this study, we integrated left ventricular (LV) structural information obtained from in vivo tagged MRI, pressure recordings, and ex vivo DTMRI to estimate the stiffness of passive LV myocardium during diastole [2]. A set of evenly spaced material points, derived from reconstruction of the 3D motion of the heart based on tag positions during the cardiac cycle, provides accurate kinematic data (strain) for modelling LV mechanics. A mathematical model of the LV was constructed using the epicardial and endocardial surface contours segmented from the tagged MRI (Fig. 1a). Fibre orientations extracted from DTMRI were incorporated into the geometric model (Fig. 1b) using a host mesh fitting technique. Using this integrated model, a finite deformation elasticity problem was then solved for early diastolic filling to simulate the passive mechanics of the LV.
The stress-strain behaviour of the ventricular myocardium was modelled using a transversely isotropic constitutive relation [4]. Minimizing the difference between the predicted and measured deformation allowed us to estimate the stiffness of the ventricular myocardium. Results: Given the predicted deformation provided by the simulation, and the actual deformation from the end-diastolic tagged MR images, the value of the in vivo stiffness parameter C1 (initially 1.2 kPa [4]) was tuned to 2.2 kPa, corresponding to a predicted end-diastolic LV cavity volume of 23 ml, which matched the experimental estimate. The displacement of a set of regularly spaced material points was used to assess the accuracy of the passive mechanics model prediction. The overall RMS error between the predicted and tracked coordinates was 0.41 mm, and a 3D colour map of the individual errors for all material points is shown in Fig. 1c. The error map will be useful for determining regional material properties in order to minimize individual predicted deformation errors. Conclusions: We have developed finite element based modelling methods to integrate cardiac structural and functional observations derived from in vivo canine tagged MR and LV cavity pressure data, and ex vivo microstructural information. Simulation of the diastolic LV mechanics using this model, and subsequent comparison against the observed deformation of the LV from cine tagged MR images, allowed us to estimate the LV muscle stiffness constitutive parameter. This type of modelling may be used to provide insight into regional distributions of myocardial stiffness and stress, and functional measures such as local energy consumption. Comparing models between healthy and diseased states will also allow us to investigate the underlying mechanisms of LV dysfunction. Introduction: Near-infrared diffuse optical tomography (NIR-DOT) has been developing continuously over the past 15 years. It is a promising tool for probing highly scattering media such as biological tissue, to recover and localize variations in optical properties from which functional parameters can be extracted. We study the development and performance of a DOT system using a single light probe and multiple detectors. Light transport through tissue is diffusive in nature and can be modeled using the diffusion equation under certain constraints. The experiments are carried out by inserting single and multiple inhomogeneities of size 6 mm with a high optical absorption coefficient into an otherwise homogeneous phantom, while keeping the scattering coefficient the same. The diffusion equation for photon transport is solved using the Finite Element Method (FEM), and the Jacobian is modeled for reconstructing the optical parameters. The inverse problem is solved using Model Based Iterative Image Reconstruction (MoBIIR). The simulation result with a single light probe shows that the system is capable of resolving inhomogeneities of size 6 mm when the absorption coefficient of the inhomogeneity is three times that of the background tissue. To validate this result, a prototype model for performing this experiment was developed. A high-frequency (100 MHz) sinusoid is used to modulate a laser beam, which is propagated through the tissue. The amplitude and phase are found to be perturbed by the presence of the inhomogeneity in the object. The experimental data (amplitude and phase measured at the detector) are used for reconstruction.
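The reconstruction here follows a model-based iterative scheme (MoBIIR, detailed below): repeatedly solve the forward problem, compare with measured data, and update the optical properties via the Jacobian. The following is a minimal sketch of such an update loop under strong simplifying assumptions; the forward operator, dimensions and regularisation are synthetic stand-ins, not the authors' FEM implementation.

```python
import numpy as np

# Minimal MoBIIR-style update loop: the forward model is linearised each
# iteration via its Jacobian J, and the update solves a Tikhonov-regularised
# perturbation equation. All quantities here are synthetic stand-ins.

def forward_model(mu_a):
    """Placeholder forward operator mapping an absorption image to boundary
    data. In practice this is an FEM solution of the diffusion equation
    with Robin boundary conditions."""
    return forward_model.A @ mu_a

rng = np.random.default_rng(0)
n_params, n_meas = 64, 13 * 12          # e.g. 13 detectors x 12 source angles
forward_model.A = rng.normal(size=(n_meas, n_params))

mu_true = np.full(n_params, 0.015)
mu_true[20:24] = 0.055                  # embedded absorber
data = forward_model(mu_true)

mu = np.full(n_params, 0.015)           # initial homogeneous guess
lam = 1e-2                              # regularisation parameter
for _ in range(20):
    residual = data - forward_model(mu)
    J = forward_model.A                 # Jacobian of a linear model is constant
    # Solve (J^T J + lam I) d = J^T residual for the update d.
    d = np.linalg.solve(J.T @ J + lam * np.eye(n_params), J.T @ residual)
    mu += d
```

In the real problem the forward operator is nonlinear, so the Jacobian must be recomputed each iteration, and both amplitude and phase data enter the residual.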
The results show that a single-source system is capable of detecting multiple inhomogeneities of 6 mm diameter. The localization error for a 6 mm inhomogeneity is found to be 2 mm. We discuss the development of a single-source system for the detection of absorbing and scattering inhomogeneities using the principles of diffuse optical tomography. Photon transport in multiply scattering media such as human tissue can be described as a diffusive process and can be modeled through the diffusion equation 14. The diffusion equation (DE) in its simplest, time-independent form is given as $-\nabla \cdot (k(r)\nabla I(r)) + \mu_a(r) I(r) = S(r)$, where $I(r)$ is the photon flux at $r$, $k(r)$ is the photon diffusion coefficient, $\mu_a(r)$ is the absorption coefficient and $S(r)$ is the source term. In a single-source DOT system, we illuminate the phantom with an NIR laser source modulated at 100 MHz. The intensity and phase measurements are taken on the detectors placed on the other side of the phantom, facing the source. For each source position, 13 detector measurements are taken. The source is then rotated by 30° and the intensity measurements repeated; this is done for all source locations around the phantom. The commonly used reconstruction algorithm for single- and double-source systems is model-based iterative image reconstruction (MoBIIR). Photon transport in a highly scattering medium is modeled using the time-independent DE. MoBIIR attempts to recover the optical properties $\mu_a$ and $\mu_s'$ by repeatedly solving the forward problem, starting from an initial guess of the optical properties and updating them in each iteration guided by the Jacobian of the forward operator. The iterative scheme involves repeated implementation of two major steps: (i) calculation of the Jacobian and solution of the perturbation equation connecting the update vectors for $\mu_a$ and $\mu_s'$ to the difference between the experimental data and the computed estimate of it; and (ii) implementation of the forward operator on the updated object. The forward operation is implemented through the finite element method (FEM), discretizing and solving the DE with a Robin boundary condition. The object is a homogeneous circular disk of absorption coefficient 0.015 mm⁻¹ with an embedded inhomogeneity of size 6 mm and absorption coefficient 0.055 mm⁻¹. The simulation and the reconstructed image using the experimental data are shown in Figs 1(a) and 1(b). For a phantom with dual inhomogeneities, the simulation result for an object with background absorption coefficient 0.015 mm⁻¹ and two embedded inhomogeneities of sizes 6 and 8 mm is shown in Fig. 2(a). The absorption coefficient of the inhomogeneities is 0.055 mm⁻¹. The reconstructed image using the experimental data is shown in Fig. 2(b). The inhomogeneities are very clearly seen in the reconstructed image. Discussion: The single-source scheme considered for detailed study here facilitates detection of single as well as dual inhomogeneities of sizes 6 and 8 mm. The reconstructed object clearly shows the inhomogeneities. However, the localization error for a 6 mm inhomogeneity was found to be 2 mm. Introduction: There is preliminary evidence to suggest that female athletes who undergo sustained, high-impact training tend to experience a prolonged second stage of labour compared to non-athletes. Although very few data are available on pelvic floor morphology and anatomy, studies have shown possible links between muscle hypertrophy and long-term, high-intensity training that involves activation of the pelvic floor muscles.
It is postulated that these changes in morphology (including both muscle size and tone) may contribute to birth complications for athletes during the second stage of labour, as the fetal head descends through the maternal pelvis. In this study, we investigate the effect of muscle size on vaginal delivery using individual-specific pelvic floor models and an anatomically based fetal skull model, by comparing the biomechanical response of the levator ani (LA) muscle of an athlete and a non-athlete. This is part of an evolving modelling framework whose ultimate goal is to model a realistic vaginal delivery, taking into account all available information (e.g. detailed LA anatomy), to help clinicians assess the risk of natural versus caesarean birth prior to labour. We have obtained two sets of magnetic resonance (MR) images of the pelvic region from a study conducted by Kruger et al.: one athlete and one non-athlete, both nulliparous (women who have never given birth). Thirteen components of the pelvic floor (bones, organs and muscles) were traced from these two data sets and fitted with cubic Hermite finite element meshes using our in-house software CMISS (www.cmiss.org). Fig. 1 shows the pelvic floor model created from the non-athlete MR images. The fetal skull data was provided by Dr Rudy Lapeer from the University of East Anglia, by laser scanning a skull replica produced by ESP Ltd. This data was extrapolated into 3D by introducing a constant bone thickness of 5 mm. To mimic the second stage of labour, the fetal head was passed through the LA over several displacement steps. Simulation started as the fetal head came into contact with iliococcygeus in an initial occiput anterior orientation. The path of the head was controlled by defining the spatial positions of several nodes on the head. The LA muscle was fixed at its attachment points to the pubis (anteriorly), ischial spines (laterally), and coccyx (posteriorly). The biomechanical interaction between the head and the passive muscle was modelled using finite deformation elasticity with contact mechanics. Results: Two important factors were evaluated from the simulation to measure the level of difficulty during vaginal delivery for each individual: the amount of force required to deliver the fetal head, and the maximum muscle stretch. We found a more than 40% increase in peak force in the athlete model compared to the non-athlete, indicating that greater effort would be required for athletic mothers. The maximum principal stretch ratio exceeded 3 for both the athlete and the non-athlete. The amount of stretch was similar at similar positions in the two models. Discussion: Biomechanical modelling of the second stage of labour is challenging given the very sparse data on the mechanical and morphological properties of the pelvic floor. We plan to further improve our modelling framework by introducing muscle fibres into the LA. The childbirth process can be influenced by many other factors, such as pelvic floor muscle activation, differences in muscle stiffness between the athlete and non-athlete, and fetal head moulding. These are likely to cause significant changes in the stress distribution, and hence the kinematics of the pelvic floor muscles and the birth outcome. Conclusions: The female athlete model exhibited the need for more force during delivery, while the amount of stretch experienced was similar between the athlete and the non-athlete model.
The model will be further developed to include additional information such as anisotropy and muscle activation. The lung model is based on a shifted log-normal-type distribution, Equation (1), where N describes the number of lung units open at a given ventilator pressure P and $x_1$–$x_5$ are model parameters. Let $P_{peak}$ denote the pressure at the peak of the experimentally derived $N_{data}(P)$ curve and set $N_{peak} = N_{data}(P_{peak})$. Differentiating Equation (1) and setting the derivative to zero yields an analytical expression relating $x_4$ to $x_3$ and $\ln(P_{peak})$. Further analysis of the $N_{data}(P)$ curve also yields analytical formulas for $x_1$ and $x_5$ in terms of the measured data (details not shown). This leaves only two unknowns, $x_2$ and $x_3$, which are analogous to the mean and standard deviation of a standard log-normal distribution. The major advantage of Equation (1) is that the log-normal distribution can be shifted to obtain more flexibility in fitting the model. Eight ICU patients with several PEEP settings were chosen and the two unknowns $x_2$ and $x_3$ were determined to best fit the measured PV data. The resulting curve from Equation (1) represents the patient's level of lung recruitment at the given PEEP. The lung model is then used to predict lung response to changes in PEEP, and the results are compared to a standard normal distribution fitted to the mean and standard deviation. The log-normal-type distribution in Equation (1) enabled an accurate description of the entire PV curve, including the transition periods at the start of inflation and deflation. The normal distribution was only capable of matching the steady portion of the curve. Better prediction of the lung response to different PEEP changes was also obtained using Equation (1). This improvement was particularly significant in 4 of the 8 patients, who had distributions quite different from a normal distribution. The results suggest that a log-normal-type curve is a better representation of the distribution of alveolar recruitment and derecruitment, and warrants further clinical investigation in the future. In particular, CT scans could be used to estimate how many alveoli are open or closed, to better quantify how log-normal the distributions actually are, as well as to validate the current lung model. The significantly improved prediction of lung response to PEEP changes increases confidence that this overall minimal modelling approach to lung mechanics will result in improved MV protocols in the ICU. Conclusion: A log-normal-type distribution is found to capture the lung dynamics associated with PEEP changes in 8 ICU patients more accurately than the currently used normal distribution. This approach enables a wider range of patients to be modelled accurately, thus improving clinical utility.
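With $x_1$, $x_4$ and $x_5$ fixed analytically, fitting reduces to a two-parameter optimisation. The sketch below illustrates this under an assumption: since Equation (1) is not reproduced in the abstract, a log-normal CDF in pressure with shift $x_5$ and amplitude $x_1$ stands in for it, keeping the same two free parameters $x_2$ and $x_3$. The data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Illustrative stand-in for Equation (1): a shifted log-normal CDF with
# amplitude x1 and shift x5 held fixed, and x2 (log-mean) and x3
# (log-spread) free, mirroring the two-unknown fit described in the text.
def n_open(P, x2, x3, x1=1.0, x5=0.0):
    z = (np.log(np.clip(P - x5, 1e-9, None)) - x2) / (np.sqrt(2.0) * x3)
    return x1 * 0.5 * (1.0 + erf(z))

# Synthetic "measured" PV recruitment data (illustrative only).
P = np.linspace(1.0, 40.0, 40)  # cmH2O
N_data = n_open(P, x2=2.8, x3=0.35) \
    + 0.01 * np.random.default_rng(1).normal(size=P.size)

# Fit only x2 and x3; x1 and x5 would come from the analytical formulas.
(x2_fit, x3_fit), _ = curve_fit(
    lambda p, x2, x3: n_open(p, x2, x3), P, N_data, p0=[2.0, 0.5])
print(f"x2 = {x2_fit:.3f}, x3 = {x3_fit:.3f}")
```

The fitted pair $(x_2, x_3)$ then characterises the recruitment distribution at that PEEP, and the same curve can be re-evaluated at other pressures to predict the response to PEEP changes.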
Introduction: It has been reported (2005) that breast cancer is one of the most common causes of female cancer death in New Zealand. X-ray mammography is currently recognised as the gold standard for screening and diagnosis of breast cancer. Since mammographic X-ray images are 2D projections of a 3D object, localisation of features identified within the breast volume is not trivial. Furthermore, mammograms represent highly deformed configurations of the breast due to compression; thus the tumour localisation process relies on the expertise of the clinicians. The primary objective of this research is to remove this subjectivity from tumour tracking and address the above limitations through the development of a physics-based biomechanical model of the breast that can realistically predict mammographic compressions, and thus reliably track visible tumours to an uncompressed configuration of the breast. We have constructed a modelling framework using the in-house software CMISS (http://www.cmiss.org) to numerically simulate mammographic compression. The biomechanical finite element (FE) model was based on finite elasticity theory coupled with contact mechanics, in order to cope with the large deformations involved in mammographic imaging. An experimental framework was set up using homogeneous, incompressible, isotropic silicone gel phantoms undergoing two large compression modes, in order to validate the modelling framework. The compressed configurations of the gel phantom were imaged using a 1.5 T MR scanner, from which the surface data and locations of four internal markers were segmented. These segmented data were used to evaluate the accuracy of the simulated models. Having validated the robustness of the modelling framework, a preliminary study was conducted to simulate a compression of the breast of a healthy volunteer. An MR breast coil was used to apply a latero-medial compression of 32%. The breast tissue was assumed to be homogeneous and isotropic, and modelled as a neo-Hookean material with an individual-specific c1 value of 0.1 kPa. The modelling accuracy was evaluated by comparing the predicted deformations against the segmented skin data and the locations of three identifiable features of the compressed breast (obtained from MR images). Results: For the gel phantom validation study, the surface root-mean-square errors (RMSE) were less than 2 mm, and the Euclidean errors for tracking the locations of internal markers were less than 3 mm for both compression modes. The model predictions for the compression of the volunteer's breast were similar: the surface RMS error was 1.5 mm, and the Euclidean errors for three identifiable internal features were less than 5 mm, apart from one feature whose size was considerably larger than the other two. The unique feature of the model is that the MR images of the uncompressed state can be embedded in the FE model and transformed according to the deformation gradient tensor F from the FE simulation result (Fig. 1). The warped images may be directly compared with experimental MR images of the compressed breast using resampling and 3D image registration techniques. Discussion: For improved modelling accuracy, we plan to further investigate the validity of the modelling assumptions made for the volunteer study. In particular, the heterogeneity of breast tissues (adipose and fibro-glandular tissues) and the fixed displacement condition of the breast tissue adjacent to the pectoral muscles will be scrutinised. The incorporation of structural compartments, such as the skin and Cooper's ligaments, will also likely influence the mechanical behaviour of the compressive deformation. We have developed a breast biomechanical model to establish a spatial mapping between 2D mammograms. Such a model can also serve as a multi-modality image registration tool to identify and track tumours seen in mammograms, MRI, and ultrasound. Such a multi-modality screening approach has been shown to improve the sensitivity and specificity of breast cancer detection. Angela W.C. Lee 1, Vijayaraghavan Rajagopal 1, Jae-Hoon Chung 1, Poul M.F. Nielsen 1 and Martyn P. Nash 1; 1 Bioengineering Institute, University of Auckland, Auckland, New Zealand. Introduction: Breast cancer is one of the most common causes of cancer death for women worldwide.
In order for clinicians to diagnose and manage breast cancer, various modalities such as mammography, magnetic resonance imaging (MRI), and sonography may be used to image the breasts. However, each imaging modality displays information about the breast tissues differently 1. Researchers have found that a combination of all three modalities leads to more effective diagnosis and management of breast cancers 2. In order to aid clinicians in interpreting breast images from different modalities, we have developed a computational framework for generating individual-specific, 3D, finite element (FE) models of the breast 3,4. Medical images are embedded into this model, which can then be solved subject to finite elasticity mechanics to predict the deformation and a warped view of the image 5. In order to analyse the accuracy of the model predictions, normalised cross-correlation (NCC) was used to compare the model-warped images with the clinical images of different gravity-loaded states. A biomechanical image registration tool of this kind will aid radiologists in providing more reliable diagnosis and localisation of breast cancer. Methods: An individual-specific FE model was created in order to model the large deformations that the breast undergoes due to gravity loading. A female volunteer was asked to lie in different positions to allow MR images of the breast to be obtained under different gravity-loading states. MR images were acquired using T2 weighting (TE=102, TR=6560). The image dimensions were 512x512 pixels with a 350x350 mm field of view and 2.5 mm slice thickness, with 52-60 slices through the supero-inferior direction. The prone MR images were segmented to isolate the breast tissues, and a tricubic Hermite FE model was created from the geometric data. This biomechanical model was deformed in accordance with finite deformation elasticity theory to predict the undeformed state of the breast tissues. This neutral state was then deformed into the other gravity-loaded orientations of the breast, such as the supine position. The prone MR images were embedded into the original model and warped in accordance with the predicted deformations of the breast tissues, using a large deformation FE modelling application (CMISS) 5. These warped images were then resampled to allow direct comparison with the equivalent MR image of the gravity-loaded breast (Figure 1). In order to analyse the accuracy of image alignment using the biomechanical breast model, NCC was used to compare the warped, resampled images with the clinical images. In addition to giving a global error measure, NCC can be used to give regional measures of accuracy. We have focused on developing biophysically based models of the breast to constrain the image deformation to physically realisable configurations, thus potentially leading to more realistic alignment. NCC was used to obtain a quantitative measure of error when breast deformations were predicted using the biomechanical model. Our initial results suggest that further work is required to improve the accuracy of the biomechanical model. In addition to comparing the global alignment of the model-warped images with the clinical images, localised NCC can be used to look at the regional variation in the image. This will be helpful in indicating the areas of the biomechanical model that require improvement. Furthermore, the local displacement errors in each region could be derived from the localised cross-correlation to provide a more meaningful and informative measure of accuracy to the user.
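Both the global and the regional NCC measures used above are straightforward to compute. The following is a minimal sketch assuming co-registered 2D slices of equal shape; the arrays and window size are illustrative, not the authors' code.

```python
import numpy as np

# Global and windowed normalised cross-correlation (NCC) between a
# model-warped image and a clinical image. Illustrative sketch only.
def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def local_ncc(a, b, win=32):
    """Tile both images and compute NCC per window, giving a regional
    accuracy map that highlights areas of poor model alignment."""
    rows, cols = a.shape[0] // win, a.shape[1] // win
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            sl = (slice(i * win, (i + 1) * win),
                  slice(j * win, (j + 1) * win))
            out[i, j] = ncc(a[sl], b[sl])
    return out

# Example: warped model prediction vs clinical MR slice (synthetic here).
rng = np.random.default_rng(0)
clinical = rng.random((256, 256))
warped = clinical + 0.1 * rng.random((256, 256))  # imperfect alignment
print(f"Global NCC: {ncc(warped, clinical):.3f}")
print("Worst regional NCC:", local_ncc(warped, clinical).min())
```

The regional map is the piece that feeds back into modelling: windows with low NCC point to regions where the material description or boundary conditions need refinement.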
(ii) To examine the importance of integrating respiratory management of critical illness with improved methods of managing sedation and agitation, and the impact of this combined approach on patient outcome. Modelling the fundamental dynamics of physiological systems, particularly for different therapeutics (e.g. ventilation), can enable more optimal therapies and implementation, as well as the potential for automating their application or creating decision support systems based on objective data. In essence, good fundamental models expand the simple data measured into a larger, more powerful clinical context, so that they can be more objectively applied to improve care. A novel model of lung mechanics that estimates the threshold opening pressure (TOP) and threshold closing pressure (TCP) of alveolar units captures the fundamental characteristics of acute lung injury and ARDS. Changes in the distributions of TOPs and TCPs can be used to inform the clinician about the level of de-recruitment and the impact of changes in therapy. While good clinicians may require little assistance, management of ventilation and weaning is, like a rugby team, only as good as the worst player in the unit. Similarly, understanding the patient's condition and dynamics is a direct lead-in to diagnosis, particularly if used to diagnose levels of function, such as the percentage of ARDS-affected lung. Patient agitation is a universal phenomenon in the critically ill. The result of suboptimal management is either dangerous agitation or over-sedation. This impacts significantly on intensive care resources through increased length of stay, resource utilisation, and morbidity such as ventilator-associated pneumonia. Current practice is very much an 'art form' that relies heavily on the intuition and experience of nursing staff. It is hypothesised that combining higher-resolution agitation metrics derived from signal processing of physiological data, pharmacodynamic models of patient responses to therapies, and better drug delivery algorithms could vastly improve on current standards of practice. Finally, new clinical research using a multimodal approach to weaning through improved ventilation and sedation is showing promising results, though not without significant challenges. Summary: Sub-optimal ventilation therapies and over-sedation have a significant adverse impact on intensive care resources. Interventions to optimise both methods of management may result in significant reductions in resource utilisation, morbidity and mortality. In the last 30 years there have been major changes in ethical attitudes to the ventilation of spinal-injured patients. Driven by the demands of patients, technical developments in ventilators have enabled the safe and reliable delivery in the community of treatment once considered only possible in a hospital environment. This has required increasingly flexible thinking from patients, doctors, nurses and technical support staff, and the questioning of previously held belief systems regarding risk management.
Ventilators seen as the gold standard for intensive care and anaesthesia purposes are too large and complex for use in the home or on a wheelchair, and this need for capable, compact, lightweight ventilators with long endurance and a failsafe user interface has led to the development of a number of suitable devices. The microprocessor revolution, combined with new construction materials and high-energy battery technology, has enabled the achievement of all of these objectives. New modes of ventilation developed in intensive care in response to the results of physiological research can now be applied in the field because of the greater configurability of modern hardware and software. The establishment of "Care Bundles", which provide guidelines to patients, providers and funders, has greatly enhanced the quality and safety of services provided to this complex group of patients. Objective: To review the recent developments in clinical practice and clinical engineering as they relate to improved management of respiratory diseases in critically ill patients and patient outcomes. Background and Review: Intensive care resources rarely match demands, regardless of how much money is invested in providing acute care services. It is thus imperative to balance costs and benefits to ensure equitable distribution of resources. Improvements in respiratory care involve many different aspects of the patient's "intensive care journey". The impact of the following on patient outcome will be reviewed:
- newer modes of ventilation to improve patient synchrony with mechanical ventilation;
- new concepts and methods of ventilation in ARDS;
- the importance of education and training to support and standardise treatments;
- development of protocolised care: the good, the bad and the ugly side of managed care;
- using high-quality audit to improve practice and to generate hypotheses for future clinical research.
Summary: Poor application of up-to-date clinical practice and/or misunderstanding of the complexities of multimodal care in patients with severe respiratory disease remains one of the greatest challenges in critical care medicine. This study compared treatment plans in patients with squamous cell cancer of the oropharynx using conventional 7-field IMRT, which has become the standard technique for most head and neck cancers in our Centre, with the VMAT technique. Methods: The CT datasets of ten patients previously treated in our Centre using conventional IMRT were used to replan the treatments using VMAT. The target volumes, doses, organ at risk volumes and dose constraints were unchanged between the plans. The planned dose to the planning target volume (PTV) based on gross disease was 60 Gy in 25 fractions, with 50 Gy in 25 fractions to the PTV at risk for subclinical disease. Dose constraints were set for the spinal cord (<50 Gy to up to 10 cm of cord), brain stem (<53 Gy to 2/3 of organ), and parotid and submandibular glands (<15 Gy to 2/3 of organ). Six patients were planned using a single-arc VMAT technique. Field-size limitations dictated the use of two arcs in four patients. Cumulative dose-volume histogram (DVH) parameters and equivalent uniform dose (EUD) were calculated for each plan, for both the PTV and organ at risk volumes. The Homogeneity Index (fraction of the PTV receiving between 95% and 115% of the prescribed dose), a Conformity Index for the PTV, monitor units (MU) and treatment times used to deliver the fraction were compared between the optimal IMRT plans and the corresponding VMAT plans.
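The Homogeneity Index is defined explicitly above; the Conformity Index has several definitions in the literature, so the simple version below (PTV coverage divided by total volume covered at the same level) is one common choice and not necessarily the one used in this study. A minimal sketch on illustrative voxel data:

```python
import numpy as np

# Homogeneity Index as defined above: fraction of PTV voxels receiving
# between 95% and 115% of the prescribed dose.
def homogeneity_index(ptv_dose, prescription):
    in_range = ((ptv_dose >= 0.95 * prescription)
                & (ptv_dose <= 1.15 * prescription))
    return in_range.mean()

# A simple conformity measure: PTV voxels covered at 95% of prescription,
# divided by all voxels covered at that level (definition assumed here).
def conformity_index(ptv_dose, body_dose, prescription):
    ptv_covered = (ptv_dose >= 0.95 * prescription).sum()
    total_covered = (body_dose >= 0.95 * prescription).sum()
    return ptv_covered / total_covered if total_covered else 0.0

# Illustrative voxel dose samples (Gy) for a 60 Gy prescription.
rng = np.random.default_rng(2)
ptv = rng.normal(60.0, 1.5, size=10_000)
body = np.concatenate([ptv, rng.normal(30.0, 10.0, size=50_000)])
print(f"HI = {homogeneity_index(ptv, 60.0):.3f}, "
      f"CI = {conformity_index(ptv, body, 60.0):.3f}")
```

In a planning system the same quantities are read off the cumulative DVHs rather than raw voxel arrays, but the arithmetic is identical.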
The six patients planned with a single arc were also planned using the Varian RapidArc software. The homogeneity index (HI) and conformity index (CI) for each pair of IMRT and VMAT plans were not significantly different. The maximum dose to the spinal cord was not significantly different between VMAT and IMRT; however, the mean dose to the spinal cord was significantly lower (p=0.0003) and the EUD for the spinal cord was significantly lower for the VMAT plans than for IMRT (p=0.002). The mean dose to the parotid was also significantly lower in the VMAT plans (p=0.002). The mean MUs delivered per fraction was 460 for VMAT compared to 901 for IMRT, and the mean time taken to deliver a 2.4 Gy fraction was approximately halved. Similar results were obtained for the six patients planned using RapidArc. In patients with squamous cancer of the oropharynx, VMAT was able to deliver treatment plans equivalent to IMRT in terms of target conformity and homogeneity, while exhibiting significant reductions in doses to critical structures and significantly reduced MUs and treatment times per fraction. * VMAT pre-existed and is NOT the same as the commercial product released by Elekta last year [1]. Recently published work has shown the TomoTherapy planning system to over-estimate the superficial dose for treatments in the head and neck region [2,3]. In this study, the superficial doses for total scalp irradiation (TSI) using helical tomotherapy have been measured on an anthropomorphic phantom using radiochromic and radiographic film, as well as a new skin dosimeter, the MOSkin. A treatment planning CT was taken of an anthropomorphic phantom (RANDO, The Phantom Laboratory, Salem NY). A hypothetical PTV and a brain contour were outlined on the data set to mimic those drawn for a patient receiving TSI at the UW Hospital. A helical tomotherapy plan was then generated using the TomoTherapy treatment planning system (TPS) (TomoTherapy Inc., Madison, WI). Gafchromic EBT film and Kodak EDR2 film were cut to the cross-sectional shape of the phantom and placed between selected transverse slices of the phantom. In addition, EBT films were cut into twelve 6 x 1.5 cm2 pieces and placed on the surface of the phantom. A novel skin dosimeter, the MOSkin, developed by the Centre for Medical Radiation Physics, was also placed at locations on the surface corresponding to the film locations. The superficial dose was found to be accurately calculated by the TomoTherapy TPS. This finding is in contrast to recent reports, probably because the treatment delivery primarily consists of beamlets tangential to the scalp. The superficial dose was found to increase from 84% to 103% and from 90% to 105% of the prescription dose over the first 2 mm depth in the phantom in selected regions of the PTV. The superficial dose was at the prescription dose or higher in some superficial regions due to the bolus effect of the thermoplastic head mask and the head rest used to aid treatment setup. The MOSkin measured a lower surface dose than the EBT film at the four measured locations. The results suggest that to achieve the prescription dose at the surface (within 2 mm depth), bolus or a custom thermoplastic helmet should be used. The MOSkin measured a lower dose than the EBT film due to a combination of the shallower effective depth of measurement and the under-response to tangential beams of this version of the MOSkin.
After applying a first-order depth correction to the EBT measurements, the MOSkin measurements agreed with the EBT film dose to within 5% at two locations and within 10% at the other two locations. The surface doses calculated by the TomoTherapy TPS agree with those measured with EBT film for a TSI to within 2% of the prescription dose. These findings are in contrast to previous publications regarding Tomotherapy surface dose calculations. This result is most likely a consequence of this particular beam arrangement, which includes primarily tangential beamlets, as opposed to beamlets orthogonal to the patient surface as in conventional head and neck treatments. When the doses from multiple tangential beamlets are superimposed, the depth required for full dose build-up is reduced and a greater surface dose is achieved. Introduction: Effective uniform dose optimised intensity modulated radiation therapy (EUD-IMRT) prostate plans are compared with 3D conformal radiation therapy (3D-CRT) prostate plans. Effective Uniform Dose (EUD) 1 is used as a planning objective tool for the rectum organ at risk (OAR). Methods: EUD is a way of weighting the mean dose with a biological parameter. In its generalised form, $EUD = \left(\frac{1}{N}\sum_{i=1}^{N} D_i^a\right)^{1/a}$ [1], where N is the number of voxels, $D_i$ is the dose in the i-th voxel, and a characterises the dose-volume effect. The EUD formalism is easily differentiable, $\partial EUD/\partial D_j = \frac{1}{N} D_j^{a-1}\, EUD^{\,1-a}$ [2], which makes it a good candidate as an inverse-planning objective function for IMRT. For an organ at risk, the objective takes the logistic form $f_{OAR} = \left[1 + (EUD/EUD_0)^n\right]^{-1}$ [3], where n is the weight which indicates the structure-dependent endpoint 2. EUD-optimised IMRT plans were generated using a radiotherapy treatment planning tool (Pinnacle version 8.0 with DMPO). For the EUD-IMRT plan the rectum DVH is achieved using a single EUD objective of 50 Gy maximum with a=6. A homogeneous target dose is met using two book-ended EUD objectives 2 or with conventional dose objectives for the PTV. EUD-IMRT and 3D-CRT plans were compared using dose-volume histogram (DVH), hysteresis-like DVH movement envelopes, and normal tissue complication probability (NTCP) analysis. Both EUD-IMRT and 3D-CRT plans displayed rectal DVH movement envelopes that were below suggested clinical trial dose thresholds (e.g., Pollack et al. 3). As shown in Figure 1, the NTCP for grade II rectal toxicity using the Källman s-model 4 is calculated as 2% for the EUD-IMRT plan versus 8% for the 3D-CRT plan. Both plan types had an efficiently low monitor unit (MU)/dose ratio (nominally 400 MU per 200 cGy); hence the time overheads of conformal and IMRT dose delivery are converging and the secondary cancer risk is likely to be equivalent. Clinical acceptance of biologically-optimised plans may well depend on reliable model parameters, which are derived by fitting these models to heterogeneous clinical data. Fortunately, normal tissue effects data acquisition has advanced greatly in the years since the seminal Emami publication in 1991, 5 although reported normal tissue effect values are still variable. Despite the uncertainty in parameter values, conventional trial comparison can show if there is superiority of EUD-objective inverse-planned IMRT dose maps, compared to those that are inverse planned using dose objectives or forward planned by humans. In the end, a better dose map is a better dose map.
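The generalised EUD of eqn [1] and the logistic OAR objective of eqn [3] are simple to evaluate on voxel data. A minimal sketch follows, with parameter values chosen for illustration only (the a=6, 50 Gy rectum objective mirrors the Methods, the remaining values are assumptions):

```python
import numpy as np

# Generalised EUD (eqn [1]) and logistic OAR objective (eqn [3]).
def eud(doses, a):
    """Generalised EUD over voxel doses (Gy); a > 1 emphasises hot spots,
    appropriate for serial organs at risk."""
    doses = np.asarray(doses, dtype=float)
    return np.mean(doses ** a) ** (1.0 / a)

def oar_objective(doses, a, eud0, n):
    """Near 1 when EUD << EUD0, falling towards 0 as EUD exceeds EUD0;
    n controls the steepness of the penalty."""
    return 1.0 / (1.0 + (eud(doses, a) / eud0) ** n)

# Example: a rectum-like dose distribution against a 50 Gy EUD objective
# with a = 6, as in the Methods; n = 8 is an illustrative choice.
rng = np.random.default_rng(3)
rectum_doses = rng.uniform(10.0, 55.0, size=5000)
print(f"gEUD = {eud(rectum_doses, a=6):.1f} Gy")
print(f"objective = {oar_objective(rectum_doses, a=6, eud0=50.0, n=8):.3f}")
```

Because the gradient in eqn [2] is available in closed form, an optimiser can push individual voxel doses directly, which is what makes EUD attractive as an inverse-planning objective.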
Introduction: Due to the complex nature of IMRT treatment plans, in-vivo dosimetry was performed on patients being treated for head and neck cancers where the treatment field was near to or encapsulated the oesophageal and/or sinus cavities. Although pre-treatment dose and fluence verification was performed for each patient, the aim of the study was to verify the actual dose inside the patient, close to the treatment site. A flexible naso-gastric tube containing thermoluminescent dosimeters (TLDs) was inserted into the oesophagus through the sinus cavity before treatment. Lead markers were also inserted into the tube so the TLD positions could be accurately determined from the lateral and anterior-posterior EPID images taken prior to treatment. The measured dose was corrected for both daily linac output variations and the estimated dose received from the EPID images. The TLDs used were LiF rods (TLD-100) and the read-out system was a Harshaw QS 5500 with the Bicron WinREMS software. The TLDs were annealed after each use and were routinely calibrated to 2 Gy at 6 MV to monitor dose response stability. The treatment planning systems used were Plato IPS (Nucletron) and Eclipse IMRT (Varian). The treatment units used for the IMRT treatments were a Varian 2100C and two Varian 21EX machines. The predicted dose for each TLD was determined from the treatment planning system, using the EPID images to identify the TLD positions, and compared to the measured TLD doses. The results are a combination of over 300 TLD measurements on 39 patients. The measured to predicted dose ratio was 0.997 ± 0.056 at a 95% confidence limit. The dose response, stability and small size of the TLDs make them ideal for this type of in-vivo dosimetry. This study has shown that TLDs are reliable for verifying the dose delivered by complex IMRT treatments against the treatment planning system. Introduction: Complex radiotherapy treatment plans (e.g. intensity modulated radiotherapy) are typically verified pre-treatment using radiographic or radiochromic film dosimetry. Over the past decade, much research has been done investigating the electronic portal imaging device (EPID) as a dosimeter. The aim of this project was to commission a commercially available portal dosimetry system, to replace our film dosimetry method of quality assurance for intensity modulated radiotherapy treatment plans. Methods: For commissioning, we first configured the Eclipse treatment planning system to perform a dosimetric portal image calculation of a treatment plan. Secondly, we calibrated the EPID for absolute dosimetry. This calibration enables the software to generate a dosimetric image from the image acquired by the detector. The portal dosimetry software uses the so-called 'gamma evaluation' tool to compare predicted dose with measured dose. The gamma evaluation tool assesses differences in dose as well as differences in position. We assessed the performance of the electronic portal dosimetry system by delivering six IMRT plans that had passed our previous (film-based) quality assurance testing. In addition, we introduced deliberate dosimetric errors by varying the monitor units delivered, and positional errors by rotating the collimator. We treated the plans both at gantry zero and at the clinical gantry angles, in order to find the most reliable and realistic method of quality assurance. At each new gantry angle, the EPID was retracted and then brought back out into position; this eliminated errors due to sagging of the EPID, which can occur as the gantry is rotated.
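The gamma evaluation combines a dose-difference criterion with a distance-to-agreement (DTA) criterion: each point takes the minimum combined metric over the comparison distribution, and passes if that minimum is at most 1. The following is a simplified 1D sketch of the idea; real portal dosimetry implementations work in 2D with sub-pixel interpolation, and this is not the vendor's algorithm.

```python
import numpy as np

# Simplified 1D gamma evaluation: for each evaluated point, search the
# reference profile for the minimum combined dose-difference / DTA metric.
def gamma_1d(dose_ref, dose_eval, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    x = np.arange(dose_ref.size) * spacing_mm
    norm = dose_ref.max()                      # global normalisation
    gammas = np.empty(dose_eval.size)
    for i, (xi, d_meas) in enumerate(zip(x, dose_eval)):
        dose_term = (dose_ref - d_meas) / (norm * dd_pct / 100.0)
        dist_term = (x - xi) / dta_mm
        gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gammas

# Example: a 1 mm shifted, slightly rescaled profile still passes 3%/3 mm.
ref = np.exp(-((np.arange(100) - 50.0) ** 2) / 200.0)
ev = 1.02 * np.roll(ref, 1)
g = gamma_1d(ref, ev, spacing_mm=1.0)
print(f"Pass rate (gamma <= 1): {(g <= 1.0).mean():.1%}")
```

Tightening the tolerances (e.g. 2%/2 mm) shrinks both denominators, which is why the same field returns a lower score under the tighter criteria reported below.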
The treatment fields compared favourably using a 3 mm distance-to-agreement and 3% dose difference tolerance, with an average gamma score for the six commissioning cases of 0.998. The gamma maps easily detected gross dosimetric (monitor unit) errors of ±10% and gross positional errors of ±5° collimator rotation. The average score was 0.987 using the tighter tolerances of 2 mm distance-to-agreement and 2% dose difference. The gamma score and the gamma map itself provide the physicist with useful information to decide if a verification plan is acceptable or if it warrants further investigation before treatment proceeds. We have been using Varian portal dosimetry clinically since February 2008. There is a significant reduction in the time spent performing pre-treatment quality assurance using this method compared with film dosimetry. Methods: A comparison between the KonRad pencil beam convolution and CMS XiO (currently used in our department for IMRT planning) superposition dose calculation algorithms, in a homogeneous phantom and in the presence of heterogeneities, is presented. The IMRT optimization and segmentation processes used in both systems were evaluated, and IMRT verification results from the new KonRad system are presented. Results: Good agreement exists between the KonRad and XiO planning systems at depths greater than 1.5 cm for a homogeneous water-equivalent phantom. KonRad overestimates the dose by up to 2.5% at the central axis of the beam and by up to 13% near the field edges for lung-equivalent material, and slightly underestimates the dose (by up to 1.9%) in and behind bone material, compared to XiO. From this study it can be concluded that the KonRad system's calculation accuracy is not suitable for IMRT of the lung, and that it gives less accurate dose calculation results than the XiO superposition algorithm for head and neck cases. The higher speed of dose calculation and the good segmentation algorithm make it appropriate for IMRT of nearly homogeneous regions, such as pelvic irradiations. Introduction: Medical physics is an essential part of modern medicine. This is particularly evident in cancer care, where medical physicists are involved in radiotherapy treatment planning and quality assurance, as well as imaging and radiation protection. However, the role of radiation oncology medical physicists (ROMPs) is diverse and varies between societies. Therefore, it was the aim of the present study to determine the education, role and status of medical physicists in the Asia/Pacific region. Materials and Methods: A simple two-page questionnaire was developed addressing seven areas:
- education;
- staffing;
- working hours and tasks for ROMPs;
- professional organisations;
- resources;
- research and teaching;
- overall satisfaction in the areas of professional recognition, remuneration and workload.
The questionnaire was sent to eminent physicists in 20 countries in the Asia/Pacific region in early 2008. Results: Answers were received from 16 countries, representing nearly 2500 medical physicists. There was general agreement that medical physicists should have both academic (typically at MSc level) and clinical (typically at least 2 years) training. ROMPs spent most of their time working in radiotherapy treatment planning (average 17 hours per week); radiation protection and engineering tasks were also common. Typically, only physicists in large centres are involved in research and teaching.
Most respondents thought that the workload of physicists was high, with more than 500 patients per year per physicist, fewer than 1 ROMP per two oncologists as the norm, and on average 1 megavoltage treatment unit per physicist. There was also a clear indication of increased complexity of technology in the region, with many countries reporting to have installed helical tomotherapy, IMRT (Intensity Modulated Radiation Therapy), IGRT (Image Guided Radiation Therapy), Gamma Knife and CyberKnife units. This, and the continued workload from brachytherapy, will require growing expertise and numbers in the medical physics workforce. Conclusion: As demand for cancer treatment and radiation oncology increases, there is an increasing need for qualified medical physicists. The increasing complexity of the treatment technology also requires a highly skilled workforce with significant continuing education needs. Addressing this will be an important challenge for the future. State-of-the-art pixel semiconductor devices of the Medipix type are becoming a very powerful instrument for position-sensitive X-ray photon and particle spectroscopy. The semiconductor pixel detector of Timepix type (256x256 pixels, 55 μm pitch) is the successor of the Medipix2 device. Each Timepix pixel can be operated in one of three modes: i) counting of particles; ii) measurement of the charge generated by interacting particles (Time-Over-Threshold (TOT) mode); iii) measurement of the time of interaction. The device can thus count the amount of charge deposited by a charged particle in each pixel, due to charge diffusion and subsequent charge sharing within the particle cluster or track, and these counts can be translated into energy with proper calibration. This feature makes possible on-line 3D visualization of particle tracks in the solid-state sensor. The charge sharing effect influencing particle track generation has been investigated with Timepix pixel detectors with a 300 μm thick silicon sensor, exposed to X-ray photons, protons, alpha particles, and ions, allowing one to sensitively measure the local Charge Collection Efficiency (CCE) at a 5 keV pixel threshold level. Cluster shapes and sizes, measured as a function of the bias voltage for protons and heavier charged particles at various angles of incidence on the Timepix devices, are reported. Results on the angle-sensitive response to 11 MeV protons and highly energetic ions of the Timepix device operated in tracking mode are reviewed. These results demonstrate the visualization of individual particle traces influenced by dE/dx losses. Recent results and ideas about the application of the Timepix device to position-sensitive (2D or 3D) and spectroscopic detection of single protons and ions will be presented, with the aim of estimating the device's potential for sub-micrometric imaging of biological objects. An estimate of the precision achieved in determining the energy and momentum vector of heavy charged particles will be given, with regard to possible applications in proton and ion radiography and therapy.
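Translating TOT counts into deposited energy requires a per-pixel calibration against known photon energies. One published approach for Timepix uses a surrogate function that is linear at high energy with a hyperbolic tail near threshold; the sketch below illustrates fitting and inverting such a curve, with calibration points and values that are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# One published Timepix TOT-energy surrogate:
#   TOT(E) = a*E + b - c / (E - t)
# Calibration points would come from known photon peaks (e.g. fluorescence
# sources); the energies and TOT values below are illustrative only.
def tot_model(E, a, b, c, t):
    return a * E + b - c / (E - t)

E_cal = np.array([8.0, 13.9, 17.5, 26.3, 59.5])         # keV, assumed peaks
tot_cal = np.array([55.0, 112.0, 145.0, 223.0, 520.0])  # counts, illustrative

# Bounds keep the threshold parameter t below the lowest calibration energy,
# avoiding the singularity during the fit.
params, _ = curve_fit(tot_model, E_cal, tot_cal,
                      p0=[8.0, 10.0, 100.0, 3.0],
                      bounds=([0, -100, 0, 0], [50, 100, 2000, 7.5]))

def energy_from_tot(tot, lo=8.0, hi=120.0):
    """Numerically invert the calibration curve for a measured TOT value."""
    grid = np.linspace(lo, hi, 5000)
    return grid[np.argmin(np.abs(tot_model(grid, *params) - tot))]

print(f"TOT=300 -> {energy_from_tot(300.0):.1f} keV")
```

Summing the calibrated per-pixel energies over a cluster then gives the total energy deposited by the particle, which is what enables the spectroscopic track analysis described above.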
The scanner is used to acquire tomographic datasets of specimens with a choice of spectral energy bins at a voxel size of 43 microns. The attenuation coefficient of each material in the sample is energy-dependent. Of particular interest are contrast agents which feature a k-edge at high energies, leading to high absorption. Sections from different energy channels enable us to distinguish between two materials that would look similar in a conventional, non-energy-selective CT. The data have been post-processed to correct for tube fluctuations, and a Fourier filtering technique has been applied to avoid ring artefacts. The CT reconstruction was done using a Feldkamp-type cone-beam back-projection. Results: The scanner has been operational for several months, imaging mice and phantoms with different types of contrast agents. Fig. 1 shows the result from scans of a mouse with contrast agent (iodine) in the gastro-intestinal tract. The 'high-energy' slice (right) shows a clear difference in grayscale between bone and the contrast agent in the belly, which cannot be identified in the 'broad spectrum' slice on the left. Further scans of mice and pathological samples are currently being prepared and will be used to further investigate the new information contained in spectroscopic datasets. New methods for identifying different materials from the CT-reconstructed data in the different energy bins are currently being developed. Initial measurements have proven the ability of the spectroscopic CT scanner to improve multi-contrast-agent imaging. The current technical progress in the development of a CT scanner using the Medipix sensor (MARS) will result in a significant increase in the amount of generated data (high resolution, multi-channel). Thus, visualizing these data and offering a simple user interface for a clinician, practitioner or scientist will be a major challenge to overcome. However, some research has already been conducted in the areas of multi-dimensional and multi-variate 3D visualization that is highly relevant in our context. Nonetheless, the interaction aspect has been largely neglected over the years, with the research focus centred largely on graphics representation and performance. In this presentation, I will explain the importance of, and the strong relationship between, user interaction and visualization, and describe how research in other scientific fields can be used in this context. Finally, I will present our first results for multi-channel 3D visualization with the MARS scanner. During this presentation I will present an overview of obstructive sleep apnoea (OSA) and explain the different types of sleep studies used to help diagnose sleep disorders. Transporting critically ill patients within the hospital environment is risky. What are the effects on patients, and on the staff who transport critically ill patients, in the aviation environment? The "stress of flight" on patients, and on the doctors and nurses who care for them, will be introduced. Training methods to increase awareness of working in the risky aeromedical environment will also be discussed. Sandy Inglis REA, Hyperbaric Medicine Unit, Christchurch Hospital, Christchurch, New Zealand. The hyperbaric environment introduces many challenges over and above normal medical issues. These include increased pressure and gas density, restricted space, fire hazards, minimal equipment, supply and exhaust of gases, and types of breathing system. Difficulty controlling the exhaust system led to increased mask/hood pressure, causing increased oxygen levels and therefore fire risk. In addition, the inspired O2 percentage is lowered by entrainment of air into the mask. A patient was referred for hyperbaric therapy in the early post-operative period following iatrogenic arterial gas embolism incurred during open-heart surgery. She experienced difficulty breathing due to the increased work of breathing (wob) at pressure.
Wob involves both inhalation and exhalation; ignoring inhalation issues for now, the main area of concern was the exhalation system, which used an 'overboard dump', meaning the exhaust gases are 'dumped' through the chamber wall to the ambient atmosphere (1 ATA). The problem was identified as the uncertain setting of a 'bias regulator' used to minimize extreme differential pressures. Quantifying the exhaust differential pressure was necessary to ascertain the 'ideal' differential pressure. The manufacturer's information was vague. Most hyperbaric units have set their differential pressures by 'trial and error', but we took a more scientific approach. Methods: The bias regulator was initially set to 0.5 bar gauge (8 psi) and the exhalation (differential) pressures were measured at various chamber depths (pressures). Measurements were made with and without the bias regulator. The wob was not measured quantitatively. The resulting graph showed (without the bias regulator) a 'sweet spot' at about 0.01 bar gauge, but this was difficult to maintain due to hysteresis. By increasing the differential pressure to a flat 'plateau' area of the graph (0.15 bar gauge), a much more stable operation was evident. This led to a much improved operation (with the bias regulator) of both the mask and head-hood systems, and vastly improved patient comfort and compliance. Discussion: Setting the bias regulator too low caused a large swing in exhalation pressures due to hysteresis, and low velocity caused condensation to settle in the pipework. Conclusions: Optimal function of the exhaust regulator, using a negative spring bias regulator, resulted in reduced expiratory pressure and thus a lowered work of breathing in the hyperbaric environment, enhancing patient safety and comfort. Wob measurement is planned for further research. Palta, Jatinder R: inadvertent radiation exposure and a model to address broader U.S. healthcare issues such as errors, costs, utilization, and restoration of integrity to the patient and healthcare process. UFPTI is a 98,000 square foot facility including three proton therapy treatment rooms with gantries that administer beams from 360-degree angles, a fixed-beam treatment room, and a simulation suite with positron emission tomography, computed tomography, and magnetic resonance imaging for accurate treatment planning. The Proton Therapy System (PTS) is provided by IBA. The institute also has two state-of-the-art IGRT/IMRT conventional radiation therapy units provided by Elekta. After installation and calibration of the PTS by IBA in April 2006, it was handed over to the physics group for acceptance testing and commissioning. The activities between hand-over of the system and the first patient treatment included: 1) system commissioning (relative and absolute dose measurements in water for many different prescribed treatment parameters: range, modulation, snout size, gantry angle, etc.); 2) beam data collection for treatment planning; 3) verification of treatment planning modelling for simple and complex cases (inhomogeneities); 4) validation of the mechanical alignment; 5) validation of the imaging systems (x-ray, laser, light-field); 6) training of therapists; and 7) mock treatments. For the first gantry at the UFPTI, the total duration of this phase was 16 weeks (from April 23 until August 14, 2006). The planned durations of each of the activities (each week comprising five 9-hour shifts) were scheduled in advance.
However, the beam measurements took somewhat longer than expected (resulting in measurements being made in parallel with the training), but overall the plan was followed quite closely. The collection of the treatment-planning beam data took four weeks as scheduled. In addition to the four weeks of measurements, considerable time was spent on preparing the measured data for import into the treatment planning system; for all options this took about four weeks (one person). A special program was developed at the UFPTI to translate measurement files directly into Eclipse import files. Finally, dose distributions modelled in an Eclipse phantom were validated against measured data, which took an additional three weeks. The first patient at UFPTI was treated on August 14, 2006. To realize the full potential of this unique state and regional treatment and research resource, and to bring about the greatest good for both individual cancer patients and society, UFPTI treats all patients on protocols that specify many details of not only the treatment itself but also the treatment process, with the goal of efficiency and accuracy. We have developed a clinical workflow for each disease site and are working on strategies to optimize utilization of the PTS using industrial engineering principles. The focus of this presentation is to: a) describe the process of acceptance testing and clinical commissioning; b) discuss the clinical workflow for different disease sites; and c) describe strategies to optimize the utilization of the PTS. The reference air kerma rate from 192Ir HDR brachytherapy sources can be measured using a suitably calibrated Farmer-type chamber and an appropriate in-air calibration jig [1] (Fig. 1). When a primary standard for 192Ir gamma rays is available, a calibration for the chamber and jig combination can be determined directly. In Australia, due to the absence of such a standard, the chamber must be calibrated by interpolation between the response in 60Co and the response in a kilovoltage x-ray beam. Corrections for the effect of the jig, scatter and beam non-uniformity must then be measured or calculated before the reference air kerma rate can be determined. We compare the direct calibration coefficient of a chamber/jig combination, which is traceable to the National Physical Laboratory (NPL) in the UK, with the same coefficient determined from the interpolation of Australian standards and a series of correction factors. Results: A Farmer-type chamber (PTW 30010, serial number 0473) and jig (Nucletron 077.211, serial number 0473) were sent to NPL for calibration using a Nucletron microSelectron HDR Classic 192Ir source (part number 096.001) [2]. The calibration coefficient obtained gives the reference air kerma rate R_K (defined as the kerma rate to air, in air, at a distance of d_ref = 1 m, in the absence of any scattering or attenuation from air, the room or any other object other than the source) in terms of the ionization current (averaged over the two catheter positions and corrected for ambient temperature and pressure). At ARPANSA, the response of the chamber at 192Ir can be estimated by interpolating [3] between 60Co and one of the higher-energy kilovoltage x-ray beams, giving 4.890 × 10^7 Gy/C for 192Ir.
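To illustrate the kind of interpolation described above, the sketch below estimates a chamber calibration coefficient at the mean 192Ir energy by interpolating linearly in log(energy) between two measured calibration points. All numerical values (beam energies and coefficients) are illustrative assumptions, not ARPANSA data, and log-energy interpolation is one common choice rather than the specific scheme used.

```python
import numpy as np

# Hypothetical calibration points: coefficients N_K (Gy/C) measured at a
# high-energy kilovoltage beam and at Co-60. Energies and values assumed.
energies_kev = np.array([120.0, 1250.0])   # effective beam energies, keV (assumed)
n_k = np.array([4.85e7, 4.93e7])           # calibration coefficients, Gy/C (assumed)

# Interpolate linearly in log(energy) to the approximate mean Ir-192 energy.
e_ir192 = 397.0                            # keV, approximate
n_k_ir192 = np.interp(np.log(e_ir192), np.log(energies_kev), n_k)
print(f"Interpolated N_K for Ir-192: {n_k_ir192:.3e} Gy/C")
```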
Correction factors must then be applied in order to obtain the response of the chamber in the jig in terms of the reference air kerma rate. The resulting chamber/jig calibration coefficients differ by 0.2%, which is within the combined standard uncertainties of 1.2% and 0.6% reported by ARPANSA and NPL respectively. Introduction: High dose rate (HDR) brachytherapy using interstitial catheter implants is an effective method for treating prostate cancer. Accurate delivery of fractionated schedules depends on minimal catheter movement between the treatment planning CT (TPCT) and subsequent fractions. We determined the degree of catheter displacement between the TPCT and subsequent fractions using implanted fiducial marker seeds and x-ray images from a mobile fluoroscopy unit. The effectiveness of adjustments made using verification x-rays was examined. Methods: In our centre, prostate patients who receive HDR brachytherapy are typically treated with external beam radiotherapy to 46Gy followed by an interstitial brachytherapy boost of 19.5Gy in 3 fractions. During needle insertion four fiducial marker seeds are implanted in the prostate (2 base and 2 apex). The seeds are identified in the TPCT and the distance from the tip of two reference needles to the seeds is measured. Verification x-rays are taken before each fraction and manual physical adjustments made to the position of the implant needles. At our centre, the tolerance for needle displacement is 3mm. We analysed data on all patients treated with stainless steel needle implants in 2007 for needle displacement from the original plan before and after adjustment. Using the measured needle displacements, treatment plans for a representative subset of patients were re-planned assuming the needle positions had not been corrected. Differences in dose to the prostate and critical structures were calculated. Results and Discussion: Ninety-one patients treated in 2007 were analysed. The needle tips were displaced inferiorly relative to the marker seeds by a mean of 4.6mm ± 3.1mm (1SD) prior to each treatment. The greatest displacement was prior to the third fraction, 5.3mm ± 3.4mm (1SD). Following physical adjustment of the needles, the mean displacement was significantly reduced, to 0.9mm ± 1.4mm (1SD), p<0.001. In 10.3% of fractions, the needle displacement was 8mm or greater. Typically this led to more than 10% of the target volume receiving less than 80% of the prescribed dose. In 28.8% of the fractions the needle displacement was 5-7mm, which led to a decrease of approximately 20% in the D100 to the clinical target volume. Conclusions: A significant number of cases demonstrated needle movement outside of tolerance. Needle displacement prior to each fraction may compromise treatment plans. Adjustment using verification films prior to each treatment is a practical and effective method of ensuring that clinically significant over- or under-dosing does not occur. Brachytherapy can be improved with better QA of treatment and a reliable real-time mechanism for alerting staff to potential radiation incidents. Ultimately, by collecting sufficient dosimetric data, the dose-response relation for critical structures, such as the urethra, can be reliably evaluated. In-vivo real-time dose monitoring can achieve these improvements. For this application we have developed a customised scintillation dosimeter, with a fibre and scintillator diameter of 0.5mm. The BrachyFOD™ dosimeter design enables real-time readout at less than 0.5 second intervals.
Early indications from clinical trials are promising and have led to further modifications of the dosimeter design. The BrachyFOD™'s performance as a clinical dosimeter was tested. Its temperature dependence, angular dependence and depth-dose relation were determined using a Nucletron HDR brachytherapy unit with a 192Ir source and compared to other commercially available dosimeters. To determine the stability of the BrachyFOD™ over time, a set of eight BrachyFOD™s was made, four of 1mm and four of 0.5mm diameter. Dose readings were taken with each dosimeter once a week for ten weeks under set exposure conditions. The mechanical durability and radiation hardness of the BrachyFOD™ were also tested. Following ethics and TGA approval, twenty patients have been recruited to test the BrachyFOD™'s performance in a clinical trial, which commenced in 2007. The BrachyFOD™ was located in the urinary catheter in the urethra for prostate HDR patients. The measured dose to the urethra was compared to the calculated urethral dose as determined by the CT dosimetry treatment plan. We have customised a scintillation dosimeter with fibre-optic readout for dosimetry in brachytherapy. Its small size and flexibility make it suitable for in-vivo intraluminal insertion, easily fitting into 16 French catheters. The small detector element volume ensures the high spatial resolution needed for the high dose gradients in brachytherapy. The BrachyFOD™ has a high signal-to-noise ratio, enabling accurate dose rate and total dose readings in real time up to distances of 250mm from the HDR source. The additional intervention of inserting the BrachyFOD™ into the urinary catheter was well tolerated by the patients, and no physical difficulties were encountered during insertion. The time taken for insertion was no more than 3 minutes. For the majority of patients, the measured dose correlated well with the calculated dose. This research will lead to innovative monitoring applications, providing new knowledge to develop the dose-response relation for the urethra. This will have relevance to safe dose escalation for improved clinical outcomes, while preserving patients' quality of life. The real-time FOD readout will support clinical decisions to intervene in the treatment should departures from the prescribed dose be detected. Early results from clinical trials show this dosimeter provides useful information, is easy to use and minimally affects the treatment procedure. The point source approximation method of calculating the dose distribution around a single brachytherapy seed overestimates the dose by over 300% along the axis of the seed, when compared to the TG43 functions. This is because the titanium casing is much thicker at the end caps than along the sides. To compensate for this overestimation, the dose is underestimated by 8% on the central plane of the seed. Over a 142-seed plan, however, the dose discrepancies average out, giving a good match for the external isodose curves. The urethral dose profile, however, is underestimated by over 3% using the point source approximation. Conclusions: Although yielding accurate isodose curves for treatment plans, point source approximations may underestimate the dose to the urethra by more than 3%. For dosimetry calculations involving small numbers of seeds, for instrument calibration or around critical structures, the full TG43 calculations should be used. For studying general dose coverage involving a large number of seeds, point source approximations are adequate.
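The point-source form of TG-43 discussed above can be written as D(r) = S_K · Λ · (r0/r)² · g(r) · φ_an(r). The sketch below evaluates this for a single seed; the dose-rate constant, radial dose function coefficients and anisotropy factor are illustrative placeholders, not data for any real seed model.

```python
import numpy as np

# Hypothetical seed parameters -- NOT vendor or TG-43 consensus data.
LAMBDA = 0.965      # dose-rate constant, cGy h^-1 U^-1 (assumed)
R0 = 1.0            # TG-43 reference distance, cm

def g_radial(r):
    """Radial dose function g(r): toy polynomial fit (coefficients assumed)."""
    return 1.0 - 0.05 * (r - R0) - 0.002 * (r - R0) ** 2

def phi_an(r):
    """1-D anisotropy factor: assumed mildly sub-unity and constant."""
    return 0.94

def point_source_dose_rate(s_k, r):
    """TG-43 point-source approximation: D = S_K * Lambda * (r0/r)^2 * g(r) * phi_an(r)."""
    return s_k * LAMBDA * (R0 / r) ** 2 * g_radial(r) * phi_an(r)

# Example: a seed of air-kerma strength 0.5 U, evaluated at three distances.
for r in (0.5, 1.0, 2.0):
    print(f"r = {r:.1f} cm: {point_source_dose_rate(0.5, r):.3f} cGy/h")
```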
In the last 25 years, the average annual medical dose in the US has increased from ~0.55 mSv to ~3.2 mSv. Major contributors to the population medical dose include CT (~1.5 mSv/year) and Nuclear Medicine (0.7 mSv/year), even though these modalities account for only 12% and 4% of all radiological examinations, respectively. In the past year, public awareness of the radiation risks associated with x-ray imaging examinations has increased, with several high-profile articles appearing in the scientific literature (e.g., JAMA, New Engl J Med) as well as the popular press (e.g., NY Times). The magnitude of the per capita medical radiation dose, or temporal changes in this parameter, should not be a major concern to the medical imaging community per se. Key questions that do need to be addressed are: (1) whether a proposed examination is indicated; and (2) whether patient exposures are as low as reasonably achievable (ALARA) without sacrificing valuable diagnostic information. Since an indicated examination is one where the patient benefit is greater than any corresponding risk (detriment), it is vital that imaging practitioners understand the magnitude of the radiation risks associated with each type of diagnostic procedure. In this paper, we describe a method for estimating patient doses (and risks) in cardiac CT angiography where account is taken of: (a) patient size; (b) patient age; and (c) patient sex. It is shown that radiation risks to the average patient undergoing cardiac CT (a ~60-year-old male weighing 90 kg) are ~1 in 1000, with most of the risk attributed to the induction of fatal lung cancer. The carcinogenic radiation risk for the most sensitive patients in a typical cardiac CT angiography population would be approximately double this average risk value. Ian Smith 1 , John Rivers 1,2 , John Hayes 1,2 , Wayne Stafford 1,2 and Catrina Codd 1 Background: Electrophysiology (EP) procedures have been reported to carry a significantly greater radiation risk than that of coronary angiography (CA). This is largely due to numerous reports linking severe deterministic radiation effects to the long procedure and fluoroscopy times (FT) involved. This study documents the results achieved through the use of strategies involving operator training and education, as well as equipment selection and optimisation, to reduce radiation risks. Methods: Records for 661 diagnostic and 1576 therapeutic EP procedures performed between January 2002 and December 2006 were analysed. Data from 1458 diagnostic-only CA procedures performed in 2006 were used for comparison. For each procedure type, FT, the number of digital frames acquired and the estimated effective dose (E) were compared. Strategies included the minimal use of high-dose imaging modes, use of minimum frame rates, use of low radiographic density imaging projections and minimisation of detector entrance doses (to achieve clinically relevant image quality). Results: A summary of the data acquired in this study is shown in Table 1. E has been derived from the procedure dose-area product using case-specific conversion factors. Discussion: Compared to CA, the FT for diagnostic EP procedures was found to be similar, while the FT for therapeutic EP was significantly longer. However, EP procedures are generally associated with a lower E than CA, the exception being procedures for atrial fibrillation (AF). Overall, generalised descriptors of radiation use were found to be markedly influenced by the case-load spectrum.
Conclusion: Through the application of a range of exposure minimisation strategies, the radiation risk to patients undergoing diagnostic and most therapeutic EP procedures (except AF ablations) can be reduced to significantly less than that faced by patients undergoing CA. E, however, is heavily dependent on procedure type and, as such, care must be taken in undertaking generalised comparisons for audit and benchmarking purposes. Fog, L. S. and Cormack, J. PET shielding recommendations have typically been based on a parallel broad beam of radiation [1]. Realistically, however, the emission of 511 keV photons from a positron emitter is isotropic, although the patient geometry affects the final radial distribution of photons; this may significantly alter the attenuation through the wall and the dose to persons positioned behind it. More importantly, current shielding calculations do not take into account the dose from radiation scattered over the shielding wall. The legal height for such walls is generally 2.1 metres, but it is often recommended that they extend from floor to ceiling "just to be on the safe side". The financial cost of overshielding can be very high in PET suites (the price of lead has increased four-fold since late 2003 [2], making things even worse). It has become increasingly important, therefore, to be able to accurately assess the thickness of shielding required for walls, and whether it is really necessary to extend such shielding all the way to the ceiling. A series of Monte Carlo simulations of the dose deposited in a person positioned behind a shielding barrier were carried out using the EGSnrc code. In these simulations, 511 keV photons were emitted isotropically from inside a volume modelling the patient. The patient was modelled by a 20.0×30.0×50.0 cm box of water. In each simulation, 5×10^7 photons were emitted isotropically from a volume extending to 2 cm inside the patient boundary. The geometry used for the simulation is shown in figures 1 and 2. A range of shielding scenarios was modelled. The transmission coefficient derived from scenarios 1 and 2 was approx. 2.5%, while that obtained from parallel broad-beam calculations is approx. 5.4% [1]. This difference is mainly due to the source geometry rather than scatter from the floor and ceiling. The dose from scatter over the barrier in scenario 5, at head level, was calculated to be 0.50 µGy/h per GBq in the patient phantom. This dose decreased significantly, to about 0.07 µGy/(GBq·h), at waist height. For a person positioned 30 cm behind a 2 cm thick by 210.0 cm high lead barrier, approximately 60% of the dose at head level is due to scatter coming over the wall, while at hip level the dose is due to radiation transmitted through the wall. A simple geometric analysis shows that the personal dose due to scatter is greatest when the person is positioned approximately 130 cm from the barrier. In summary, PET shielding estimates based on parallel broad-beam geometry appear to overestimate the transmission through lead shielding walls by a factor of about 2. The dose from scatter over the barrier at waist height is relatively small, but may have to be taken into account if the design dose limit is low. Shielding from floor to ceiling is probably not warranted in most instances for PET gamma emissions; in PET/CT installations, however, a thinner layer of shielding may need to extend to the ceiling of the imaging room to limit X-ray scatter over the wall from the CT unit.
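The practical consequence of the factor-of-two difference reported above can be illustrated by scaling an unshielded dose rate by the two transmission coefficients. In the sketch below the unshielded dose rate is an assumed placeholder; the two transmission values (5.4% broad beam, 2.5% isotropic Monte Carlo) are taken from the abstract.

```python
def dose_rate_behind_wall(unshielded_uSv_h, transmission):
    """Scale an unshielded dose rate by a barrier transmission factor."""
    return unshielded_uSv_h * transmission

unshielded = 10.0   # uSv/h at the occupied point with no wall (assumed value)

# Transmission of 2 cm of lead at 511 keV: parallel broad-beam value vs the
# isotropic-source Monte Carlo value quoted in the abstract.
for label, t in [("broad beam (5.4%)", 0.054), ("isotropic MC (2.5%)", 0.025)]:
    print(f"{label}: {dose_rate_behind_wall(unshielded, t):.2f} uSv/h")
```

The broad-beam assumption roughly doubles the predicted transmitted dose rate, which translates directly into thicker (and more expensive) lead than the isotropic-source geometry actually requires.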
Introduction: Radioiodine therapies are performed twice a week at Sir Charles Gairdner Hospital (SCGH). Patients receiving in excess of 600 MBq I-131 are held in one of two radioactive isolation rooms. Medical Technology and Physics at SCGH have recently acquired a floor-washing robot [1] to assist the physicist with decontamination of the room after discharge of the patient. The current project aimed to evaluate the effectiveness of the robot and assess the benefits of automated surface decontamination. Methods: A controlled experiment was performed by deliberately contaminating a linoleum offcut with I-131 (sodium iodide). The extent of fixed and removable contamination was assessed by two methods: 1) direct Geiger-Mueller counting; and 2) beta-counting wipe tests. The contaminated offcut was then sectioned in two; one half was left untreated and the other half received decontamination by the floor-washing robot. The extent of contamination was reassessed after decontamination. Surface contamination was also assessed in situ on the ward by Geiger counting and wipe testing. Measurements were made before therapy, at discharge of the patient and after decontamination by the robot. Fifteen measurement locations were sampled to represent typical surface contamination levels. A further five "hotspot" locations were identified by Geiger counting at discharge. Results & Discussion: Figure 1 shows a contour map of radioiodine contamination, obtained by Geiger counting of the linoleum offcut before and after treatment. There was a significant reduction in count rate, from 150 cps to 30 cps, following automated decontamination. Contamination was removed rather than spread around by the robot. Wipe testing on the ward indicated typical removable floor contamination levels of the order of the derived contamination limit [2] (20 Bq/cm²) and hotspots 10 times in excess of the limits. The robot was effective in performing automated decontamination, clearing approximately 60-80% of removable contamination. One 45 min wash cycle was effective in reducing typical contamination to acceptable levels, and hotspots benefited from a second pass of the robot. The robotic floor-washing device was considered suitable to provide effective automated decontamination of the radioiodine ward. In addition to successfully decontaminating the ward linoleum, the robot affords other benefits. The time spent by the physicists decontaminating the room is greatly reduced, offering financial and occupational safety and health benefits. Cormack, J. and Fog, L. S. Introduction: Shielding calculations for diagnostic imaging facilities are generally carried out using the methodology outlined in NCRP Report 147 [1] or the joint BIR/IPEM report on radiation shielding for diagnostic X-rays [2]. For PET/CT and nuclear medicine shielding, a similar approach is usually employed [3]. Although the methodological approach may be similar, considerable variation is possible in the choice of design dose limits and occupancy factors. For uncontrolled areas, design dose limits of 1 mSv per annum, 0.3 mSv per annum and 0.25 mSv per annum are recommended in different reports. The rationale for the lower limits is that some account must be taken of possible exposure from other sources; although this is undeniably true, there appears to be no solid scientific basis for the limits chosen, and this has been the subject of some controversy [4].
In addition, the choice of occupancy factors for shielded areas is somewhat subjective and, as will be shown in this paper, the way in which they are generally used is flawed. Methods: Concurrent exposure from multiple sources (e.g. in a shared control room between two fluoroscopy imaging rooms) is relatively easy to handle; the designated design dose limit for the area can simply be divided by the number of concurrent exposures. Sequential exposures from multiple sources are more problematic and are poorly handled by current shielding methods. For example, an office, corridor, tea room and bathroom adjoining a CT room would typically be allocated occupancy factors of 1, 0.2, 0.2 and 0.05 respectively. The problem here is that the "representative individual" (as defined by the ICRP) is highly unlikely to be exposed in these areas in an "exclusive OR" fashion (e.g., all of their working day in the office, OR 0.2 of their working day in the corridor, OR 0.2 of their day in the tea room, OR 0.05 of their working day in the bathroom). A far more likely scenario is that they may typically spend, say, 6 hours in their office AND 0.75 hours in the corridor AND 1 hour in the tea room (lunch and tea breaks) AND 0.25 hours in the bathroom (a total of 8 hours per day). Under these circumstances it can easily be shown that, if the shielding is calculated … Introduction: Power-assist exoskeletons have been actively studied to assist the motion of physically weak persons. In this study, the walking patterns of young male adults are analyzed at different walking velocities. This study aims to identify the relationship between walking patterns and walking velocities in order to generate natural walking patterns for a lower-limb exoskeleton robot in accordance with the walking velocity. Experiments were performed with young healthy male subjects using the VICON system to acquire the angles of the hip, knee and ankle joints while the subjects performed walking motions at several walking velocities. Based on the results of the experiments, the walking patterns that can be used to generate the natural walking motion of the lower-limb exoskeleton robot are defined in this study. Methods: The VICON MX+ system is used to obtain each joint angle of the lower limb during walking. The VICON cameras emit strobe light, which reflects back into the cameras from the markers, giving a clear grayscale view of each marker (Fig. 1 and Fig. 2) on the human subject. Gait analysis commonly involves the measurement of the movement of the body in space (kinematics). The gait cycle in its simplest form comprises stance and swing phases. In the experiment, each joint angle is compared over two walking cycles at three different walking velocities (V1, V2 and V3, as shown in Table 1), with natural walking motions performed by three healthy subjects. When analyzing each subject's gait-cycle data, the start of the gait cycle is defined as the moment the subject's left foot touches the ground. The gait cycle begins when the left foot contacts the ground and ends when that foot contacts the ground again. Thus, each cycle begins at initial contact with a stance phase and proceeds through a swing phase until the cycle ends with the limb's next initial contact. The stance phase accounts for approximately 60 percent, and the swing phase for approximately 40 percent, of a single gait cycle.
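As a minimal illustration of how gait cycles might be segmented using the initial-contact definition just given, the sketch below detects contact events from a foot marker's vertical trajectory with a simple height threshold. The threshold, sampling rate and synthetic trajectory are assumptions for illustration; the study's actual VICON processing pipeline is not described at this level of detail.

```python
import numpy as np

def initial_contacts(z, thresh=0.02):
    """Return sample indices where the foot-marker height (metres) first
    drops below `thresh` -- a crude proxy for initial ground contact."""
    below = z < thresh
    return np.where(below[1:] & ~below[:-1])[0] + 1

# Synthetic marker height: ~1 Hz gait, foot approaching the floor each cycle.
fs = 100.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
z = 0.05 + 0.05 * np.sin(2 * np.pi * 1.0 * t)   # toy trajectory, metres

events = initial_contacts(z)
cycle_durations = np.diff(events) / fs       # one duration per gait cycle
print("gait cycle durations (s):", np.round(cycle_durations, 2))
```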
Figure 3 shows the joint angles of subject 1 at the three speeds V1, V2 and V3. In the figure, (a), (b) and (c) show the hip, knee and ankle joint angles of the left leg, respectively. From the figures, it can be seen that the walking patterns change according to the walking velocity. When the walking frequency increases, the maximum flexion angle of the knee and hip joints increases by 3-5 degrees. The maximum extension angles of each joint also increase by 3-5 degrees when the walking frequency increases. Similar patterns were obtained with the other subjects. Discussion: In repeated gait cycles, the knee joint angle increases when the foot is put down on the floor (at flexion2 in Fig. 3(b)). For all three subjects, each angle changes in the same manner as the walking velocity increases. The flexion angle of the ankle joint at the beginning of the gait cycle increases with the walking velocity. This phenomenon can be applied to the exoskeleton robot to maintain good balance and to make walking with the exoskeleton comfortable. The walking patterns of young male adults were analyzed at different walking velocities. The relationship between the walking patterns and walking velocity of normal persons was identified in order to generate natural walking patterns for a lower-limb exoskeleton robot. Burwood Academy of Independent Living, Christchurch Introduction: This paper will summarise a project undertaken by Industrial Research Limited (IRL) and The Burwood Academy of Independent Living (BAIL) looking at patient and clinician perspectives of technologies developed for use with stroke survivors. A bilateral upper limb exerciser (BULE) device was developed to allow patients with stroke to exercise the affected weak arm using their unaffected limb as the "driver". This device was linked with augmented reality (AR) games to motivate the patient and provide quantitative feedback/results for both patients and clinicians. Method: Initially, chronic stroke survivors were trialled over four weeks on a simple augmented reality game that was designed to exercise the impaired limb in a gravity-supported manner over a table top. Clinical outcome measures pre- and post-intervention were recorded, but the primary objective was to identify patients' perspectives on display parameters, game difficulty, marker positioning and the preferred results format. Patients' comments and choices were recorded within each session, with a focus group interview occurring at the end of the intervention phase. In the second phase of trialling, patient and clinician perspectives of the BULE device were explored through respective focus groups. Separate focus groups looked further at the issues arising from combining the two technologies. Voice-recorded transcripts were analysed for common likes and dislikes, with the identified issues being used to inform subsequent design changes. Results: In the initial pilot project looking at the augmented reality game alone, no significant difference was noted in the clinical outcome measures recorded. Participants felt that the game had benefits in providing an incentive to exercise their limb but were frustrated by the prototype game's lack of progression and its results display format. A numerical or time score was desired by patients to allow them to recognise performance improvements and further motivate them. The need for the technology to be robust, uncomplicated and maintenance-free was highlighted.
In the later focus group discussions regarding the BULE and the combined BULE/AR system, clinicians and stroke survivors provided valuable insights into the usability, clinical value and shortcomings of the technologies. Useful design improvements were suggested by the focus group participants. Discussion: Clinicians and patients recognized the potential benefits provided by emerging technologies, specifically the BULE device and AR games. In particular, it was felt that these types of devices provide the opportunity for additional rehabilitation activity to occur without impacting significantly on therapist time, as patients can potentially undertake this exercise outside of conventional therapy or following discharge from existing rehabilitation services. Traditionally, access to post-inpatient rehabilitation services is very limited, and so devices that have the potential to be used in the home environment were seen as advantageous. However, cost and ease of technical use were seen as major potential limiting factors. While common issues were often identified by engineers, clinicians and patients, each had their own unique perspectives and priorities that need to be considered and rationalised as part of this process. Conclusion: The value of including end-user perspectives (patients and clinicians) on the technologies as they evolve has been highlighted through this process of pilot projects and focus group discussions. Engineering solutions need to recognise the reality of end-users' time constraints, priorities, and lack of technological expertise. The best time to address these concerns is during the design process and, as such, input from these groups is recommended from the outset. Combining upper limb exercises with augmented reality has the potential to provide a self-motivating therapeutic exercise regime. In this study, we examined whether PCS patients continue to show disparities in eye movement function at 3-4 months following mCHI compared to patients with good recovery. It was hoped that the assessment of eye movement function might provide sensitive and objective functional markers of ongoing cerebral impairment in patients with PCS, supporting the presence of PCS independently of psychometric assessment and patient self-report. The study is still in progress, with this paper presenting preliminary results. The groups comprised 20 PCS patients and 20 controls (i.e., CHI patients of similar injury severity but good recovery, individually matched for age, gender, education, and time post-injury). The PCS subjects were recruited amongst CHI patients with persistent symptoms who had been referred to the local Concussion Clinic (Burwood Hospital) for further evaluation. Controls were recruited via a database of mCHI patients with known recovery status at 3 months post-injury. All PCS patients received a clinical evaluation as part of their standard assessment at the Concussion Clinic. In addition, all participants in the study completed neuropsychological assessment and computerized eye movement tests, in combination with the assessment of self-perceived health status by means of standardized health status questionnaires. Eye movements were assessed using infra-red oculography, including reflexive, antisaccade, memory-guided, and self-paced saccadic paradigms, in addition to sinusoidal and random oculomotor smooth pursuit (OSP) tasks. The groups differed markedly in postconcussive symptom levels and problems with activities of daily living.
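Saccade measures such as those reported below are commonly derived by applying a velocity threshold to the eye-position trace recorded by oculography. The following sketch shows such a detector on synthetic data; the 30 deg/s threshold, sampling rate and synthetic movement are assumptions for illustration, not the study's actual analysis pipeline.

```python
import numpy as np

def detect_saccades(position_deg, fs, vel_thresh=30.0):
    """Flag samples whose eye velocity (deg/s) exceeds a threshold --
    a standard, simplified saccade criterion (threshold assumed)."""
    velocity = np.gradient(position_deg) * fs
    fast = np.abs(velocity) > vel_thresh
    onsets = np.where(fast[1:] & ~fast[:-1])[0] + 1
    return onsets, velocity

# Synthetic trace: fixation, then a 10-degree movement over ~50 ms at t = 0.5 s.
fs = 500.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 1, 1 / fs)
pos = np.interp(t, [0.0, 0.5, 0.55, 1.0], [0.0, 0.0, 10.0, 10.0])

onsets, vel = detect_saccades(pos, fs)
print(f"saccade onset(s) at t = {t[onsets]} s, peak velocity {vel.max():.0f} deg/s")
```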
The PCS group had significantly poorer performance on measures of antisaccades, self-paced saccades, and memory-guided sequences, with marginal deficits on OSP. The impaired oculomotor measures in the PCS group included a higher number of response errors in the antisaccade task, poorer visuospatial accuracy on anti- and memory-guided saccades, a smaller number of self-paced saccades, longer saccade durations for self-paced saccades, a higher number of eye movements when performing memory-guided sequences of saccades, slower tracking velocity on 60 deg/s OSP, and a longer lag on random OSP. Neuropsychological functions more affected in the PCS group included executive function, sustained and divided attention, speed of information processing, memory and cognitive flexibility. In addition, the PCS group had substantially poorer scores on the Beck Depression Inventory. Effect sizes of the significant oculomotor and neuropsychological differences were equivalent. Discussion: Whilst oculomotor function and neuropsychological tests partially overlapped in identifying foci of impaired brain function, such as stronger impairment of prefrontal function in the PCS group, eye movements suggested impaired function in areas not obvious from neuropsychological testing: specifically, problems in parietal function, deficits in the visuospatial transformation centres in the posterior parietal cortex (PPC), impaired communication of the PPC with subcortical/brainstem structures, and potentially problems with cortico-cortical exchange of information between the parietal and frontal cortical areas. Importantly, the oculomotor deficit profile of the PCS group was not consistent with that observed in non-trauma patients with major depressive disorder. This finding differentiates PCS from simply being a complex form of depression and suggests that the high incidence of depression amongst PCS patients is a symptom rather than a (premorbid) cause of PCS. Conclusions: Eye movement assessment may provide additional information about brain function in patients with PCS, offering objective markers of ongoing cerebral impairment. These would be independent of patient self-report and neuropsychological assessment and might be useful in supplementing patient evaluation, providing external confirmation of incomplete recovery. Eye movement testing might be of particular interest in patients who report high symptom levels and cope poorly with activities of daily living but whose neuropsychological test profile is unremarkable. Introduction: Many smart implantable biomedical devices require electrical energy for operation. Transcutaneous Energy Transfer (TET) can provide continuous power to implanted devices without the risk of infection associated with a wire passing through the skin. It is implemented through a transcutaneous transformer, where the primary and secondary coils are separated by the patient's skin, providing two electrically isolated systems. The electromagnetic field produced by the primary coil penetrates the skin and produces an induced voltage in the secondary coil, which is then used to power the biomedical device. Challenges for a TET system relate to the need to avoid heating effects and to deal with variable coupling conditions between the coils. Implementing resonant circuits, with a characteristic resonant frequency, is a valuable approach to dealing with loose coupling between the primary and secondary coils.
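A minimal sketch of the resonant-tank arithmetic underlying such schemes: an LC tank resonates at f = 1/(2π√(LC)), so switching extra capacitance into the primary tank shifts its resonant (and hence operating) frequency. The component values below are assumptions for illustration only.

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L = 10e-6                              # primary coil inductance, H (assumed)
C_base = 220e-9                        # base tank capacitance, F (assumed)
C_switched = 100e-9                    # capacitance switched in, F (assumed)

f_tuned = resonant_frequency(L, C_base)
f_detuned = resonant_frequency(L, C_base + C_switched)
print(f"tuned:   {f_tuned / 1e3:.1f} kHz")
print(f"detuned: {f_detuned / 1e3:.1f} kHz")
```

Moving the operating frequency between these two points changes how well the secondary pick-up is tuned, and therefore how much power reaches the implanted load, which is the basis of the frequency-control method the paper describes.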
In this paper, we present a closed-loop, frequency-based power regulation method which varies the resonant frequency of the primary system to deliver the correct power to the load. Methods: Frequency control involves varying the operating frequency of the primary power converter to vary the power delivered to the load. Depending on the operating frequency of the primary power converter, the secondary pick-up coil is either tuned (maximum power transfer) or detuned (reduced power transfer), thereby varying the effective power delivered to the implantable load. The operating frequency of the primary converter can be varied by varying the effective capacitance in the primary resonant tank. This was carried out using a dynamic switching-capacitor control method. As illustrated in figure 1, the closed-loop control is implemented by placing two radio frequency (RF) transceivers in the system. The DC output voltage (V_dc) of the pickup is detected and transmitted to the external transceiver. The external transceiver processes the data and adjusts the resonant frequency using soft-switching techniques that introduce very few losses and maintain low harmonic distortion in the sinusoidal resonant voltages and currents throughout the system. We have achieved an efficiency of 86% when transferring 15W over a physical gap of 12mm. Trials have demonstrated that the temperature rise of the implanted components is less than 2 degrees when transferring 7W, a typical load demand for a left ventricular heart pump. We will report on the dynamic response to changes in coupling conditions between the coils due to normal patient movement. Introduction: Behavioural microsleeps ('microsleeps') often occur during extended visuomotor tasks, such as driving. They are of particular concern in occupations in which public safety depends on extended unimpaired performance, such as truck drivers, locomotive drivers, pilots, air traffic controllers, health professionals, and process control workers. Slow eye-closure and sudden task non-responsiveness are strong behavioural indicators of microsleeps. Hence, understanding and quantifying the brain mechanisms underlying voluntary slow-eye-closure and task non-responsiveness during a visuomotor task are important precursors to the investigation of the neural correlates of microsleeps. Methods: Five right-handed healthy volunteers (4 males and 1 female, aged 24-33 years) participated in the study. Their general health and sleep habits were recorded using an actiwatch, a sleep log, and a set of questionnaires. An MR-compatible fibre-optic display system was used to present a continuously-moving 2-D random target (yellow circle, d = 18 px, BW = 0.25 Hz) and the joystick response (red circle, d = 15 px) generated by a PC (1024 x 768 resolution, refresh rate 60 Hz). Participants were instructed to move an MR-compatible finger-based joystick so that the response circle was as close as possible to the moving target at all times, except when they had to (1) simultaneously slowly close their eyes and stop tracking when the screen went blank (cued slow-eye-closure) and (2) stop tracking but keep their eyes open when the response cursor stopped (cued task non-responsiveness). Whole-brain structural and fMRI data were acquired on a GE 3T scanner. Continuous VEOG and EEG data were acquired using Ag-AgCl electrode caps via carbon fibres (MagLink), Synamps2 amplifiers, and Scan 4.3.2 software (Neuroscan). The vertical and horizontal positions of the joystick were sampled at 60 Hz and stored for offline analysis.
Video of the right eye was recorded at 25 Hz using a Visible Eye™, incorporating a visual display and fibre-optic camera in the scanner, combined with a custom-built video recording system. The start and end of cued slow-eye-closure, task non-responsiveness, and spontaneous eye-blinks were identified and marked by expert rating of the eye-video and tracking response using custom-built SyncPlayer™ software. Onset times of the identified events were used as regressors in general-linear-model-based fMRI analysis using the statistical parametric mapping software SPM5 (www.fil.ion.ucl.ac.uk/SPM). Using a fixed-effect analysis of individual subjects with a threshold of p < 0.05 (FWE corrected), we found several brain regions with consistent activation or deactivation (Fig. 1). Consistent increases in activity in multi-sensory areas, including visual and auditory processing areas, were observed during cued slow-eye-closure. We also observed decreased activity in the motor cortex during both cued slow-eye-closure and cued task non-responsiveness. There were no activations or deactivations during spontaneous blinks, which is considered to be due to the stringent FWE-based correction used. Discussion: Increased activity in multisensory regions of the brain is consistent with the hypothesis that these areas are involved in maintaining an interoceptive state during eye-closure and, conversely, are inhibited when one is actively engaged in a cognitive task. Deactivation of the precentral gyrus during both cued slow-eye-closure and task non-responsiveness reflects cessation of motor control. By simultaneously recording fMRI, VEOG, and eye-video, we were able to determine the neural correlates of voluntary slow-eye-closure and voluntary task non-responsiveness when performing a continuous tracking task; both are important behavioural markers of microsleeps and, hence, the fMRI changes associated with them are needed to differentiate them from the BOLD activations and EEG activity associated with microsleeps themselves. The frequency and direction of gastric motility are driven by a cyclic, omnipresent electrical activity known as the slow wave. Impairments in the propagation of this gastric electrical activity (GEA) induce changes in normal gastric motility. These motility disturbances have been observed in various gastrointestinal pathologies such as gastroparesis, in which there is delayed gastric emptying. Alteration of GEA by external stimulation, referred to as gastric electrical stimulation (GES), has been proposed as a potential treatment for gastroparesis and obesity. We aimed to perform high-density recordings from pigs in-vivo to improve knowledge of GEA propagation and the effects of GES. Methods: Platform: A 4x8 electrode platform was constructed by arranging 0.3 mm Teflon-coated silver wires at 5mm intervals in a PVC and epoxy-resin base plate. Electrodes were soldered to stainless steel connecting wires, which passed through the base plate to a sleeve constructed of copper shielding and silicone, and terminated in a SCSI cable connector. In-vivo mapping: Four weaner pigs weighing 30-35kg underwent general anaesthesia followed by a midline laparotomy. The electrode assembly (Fig 1a) was positioned over the antral serosa (Fig 1b) with the aid of wet gauze. Saline at 39 °C was applied liberally and the wound approximated to maintain homeostatic conditions. Recordings were acquired via a Biosemi ActiveTwo device (www.biosemi.com) and filtered using a second-order Bessel filter with a cut-off frequency of 2 Hz.
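A sketch of the pre-processing step just described, assuming SciPy is available: a second-order low-pass Bessel filter with a 2 Hz cut-off applied to a synthetic slow-wave trace. The 512 Hz sampling rate and the synthetic signal are assumptions (the abstract does not state the acquisition rate), and zero-phase filtering is one reasonable offline choice rather than the authors' stated method.

```python
import numpy as np
from scipy.signal import bessel, filtfilt

fs = 512.0                                    # sampling rate, Hz (assumed)
b, a = bessel(2, 2.0, btype="low", fs=fs)     # 2nd-order Bessel, 2 Hz cut-off

# Synthetic serosal recording: ~0.05 Hz slow wave plus broadband noise.
t = np.arange(0, 60, 1 / fs)
raw = np.sin(2 * np.pi * 0.05 * t) + 0.3 * np.random.randn(t.size)

filtered = filtfilt(b, a, raw)                # zero-phase low-pass filtering
print("first filtered samples:", np.round(filtered[:5], 3))
```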
Activation times were identified semi-automatically from the most negative gradient of each GEA event. Two needle electrodes (red dots in Fig 1b) connected to a DS8000 World Precision Instruments stimulator (www.wpiinc.com) were inserted into the seromuscular layer of the antrum, and a 5 mA stimulus at 0.05 Hz was applied to induce retrograde GEA. Results: Slow waves were consistently recorded with a good signal-to-noise ratio. Activation maps are shown in Fig. 1c. The slow wave velocity was determined to be approximately 15.1 mm/s for normal GEA, in the direction indicated by the arrow in Fig. 1c. Retrograde GEA was induced through external stimulation, and the retrograde slow wave velocity was determined to be approximately 12.6 mm/s, in the direction indicated in Fig. 1d. [Fig. 1 caption from the preceding fMRI study: … (4), cuneus (5) and bilateral lingual gyrus (6) activated, and inferior occipital (7), R superior/middle frontal (2) and L postcentral gyrus (3) deactivated, during slow-eye-closure while tracking; precuneus (9) and L middle temporal gyrus (8) activated, and cingulate gyrus (10/11) deactivated, during cued task non-responsiveness.] These velocities are faster than what has been previously reported, although little other data is available from pigs. This work forms the basis for future similar experiments in which propagation details, including velocity, will be accurately defined, and the results will be employed for validation of computational models of normal and abnormal GEA currently under development. Conclusions: Initial results of GEA mapping are encouraging, and provide a foundation for the migration of these techniques to human patients undergoing open abdominal surgery. Introduction: The electrogastrogram (EGG) is a non-invasive recording of the electrical activity of the stomach, measured by surface electrodes placed on the abdominal skin. In the pacemaker region of the stomach, which is situated on the greater curvature between the fundus and the corpus, spontaneous electrical depolarization and repolarization occur, generating myoelectrical waves known as the gastric pacesetter potential, or slow waves. These events spread around the circumference of the stomach and down it to the pylorus. The greater velocity of propagation around the stomach than down it causes a ring of excitation to develop, which is the electrical basis of the gastric peristaltic contraction and the main mechanism by which the stomach empties its contents. The main problem in analyzing EGG data is the weakness of the gastric signal (its amplitude ranges between 100 and 500 µV) and the strong interference from the other organs situated near the stomach. Several methods have been developed to extract the gastric component from the EGG recording, such as band-pass filtering or adaptive noise cancellation, but difficulties in analysis remain, because the frequency range of the gastric signal overlaps with that of the other organs near the stomach and artifacts may resemble pathological signals in shape. In this situation the technique of Independent Component Analysis (ICA) was taken into consideration. This statistical method, used to extract a set of independent components from the mixture of signals recorded in the EGG, combined with running spectral analysis, seems to provide good insight into the response of the stomach to stimulation with water.
In the framework of ICA we observe n signals X_1(t), X_2(t), …, X_n(t) (in our case the n EGG signals recorded by a multichannel electrogastrograph), which are assumed to be linear combinations of n unknown, mutually statistically independent components S_1(t), S_2(t), …, S_n(t). Let X = [X_1(t), X_2(t), …, X_n(t)]^T and S = [S_1(t), S_2(t), …, S_n(t)]^T; then X = AS, where A is an unknown non-singular mixing matrix. The ICA algorithm focuses on recovering the source signals S_1(t), S_2(t), …, S_n(t) from their mixtures alone, by estimating the matrix W = A^-1 so that S = WX. In this study the Matlab implementation of the FastICA algorithm was successfully applied to simulated EGG data contaminated by respiratory and random noise, and to real data with a water load test obtained from a healthy subject as an example of a correct stomach response to water stimulation. In the real EGG data analysis, the ICA method was combined with adaptive spectral analysis in order to show the electrical response of the stomach to water ingestion in the healthy state and to examine changes in the basic stomach rhythm. The adaptive spectral analysis method is based on ARMA parametric modeling of the EGG signal. Once the adaptive filter converges, the power spectrum of the EGG signal can be computed from the ARMA modeling parameters. In this paper an ARMA spectral analysis toolbox for Matlab was used to compute the EGG power spectral density function of an ARMA process for every minute of EGG data. Results: Figure 1 presents the results of the analysis described above. The EGG recording with the water load test is of great value, because it provides useful clinical information about the capacity, relaxation and stretch of the stomach wall. An abnormal water load test suggests that disorders of relaxation of the neuromuscular stomach wall are present and that gastric neuromuscular dysfunction may be the cause of the pathological symptoms. The proposed method seems to give new insight into the response of the stomach to the provocative test. It is clear that a non-invasive, easy-to-perform, high-quality EGG recording with a provocative test could aid the physician in the diagnosis of patients with eating disorders. Moreover, the development of new drugs based on the correction of an objective gastric abnormality is also an exciting use of EGG recording, so new methods of analysis for this signal are highly desirable. La Trobe University, Melbourne, Australia Introduction: Diabetes is a condition that affects much of the world's population. Statistical trends show significant growth in diabetes-related illnesses and deaths over the past few years [1]. Accurate and regular measurement of the blood-glucose concentration is crucial to managing diabetes. Research into this area has provided evidence that tight control of the blood glucose concentration can prevent major impacts to health. Due to the invasive nature of their measurements, however, current commercial technologies simply cannot provide tight glucose management. This study aims to describe the process of constructing a monitoring device for use with a non-invasive glucose sensor. The monitoring device comprises the hardware, firmware and software needed to support the non-invasive glucose sensor. The two most promising techniques for developing such a sensor are Surface Enhanced Raman Spectroscopy (SERS) [2] and Optical Coherence Tomography (OCT) [3].
Pending acquisition and tailoring of these sensing technologies, the development of a clinically-oriented monitoring device has been undertaken, and a simulator has been used to provide input for the device. The hardware incorporates four modular printed circuit boards (PCBs) designed for mounting inside a plastic enclosure. The main PCB incorporates internal memory, a real-time clock, a serial communications port and the power supply circuitry. Three smaller PCBs were designed for the liquid crystal display (LCD) driver, the user input/output systems (pushbuttons, lights and alarm) and the analog input circuitry. Together with the accompanying firmware, the device samples the analog input and converts it to a glucose concentration through a linear model. However, the device remains flexible in order to support any type of model describing the relation of the input voltage to a glucose concentration. It is expected that the SERS technique will yield samples only once every twenty minutes and the OCT technique once every five minutes. As such, the sample rate also remains flexible to accommodate the sensor. The device is able to measure glucose concentrations between 1mmol/L and 20mmol/L with a resolution of 0.02mmol/L. Once a measurement is taken, it is graphed on the LCD to show variation through time, and the most recent numerical value is also presented. For further analysis, the time-coded measurement is saved into a data log in the on-board memory. Up to two weeks of continuous measurements can be stored and later uploaded to a standard PC through appropriate software. Results: The system accuracy, data storage and data transmission capabilities were tested using a digital oscilloscope. The results showed that the system had a mean error of ±1.4%. In terms of data storage, it is expected that a data error will occur once every 14 hours. This error can later be removed through an interpolation algorithm. Data transmission to a standard PC was verified as working correctly. The accuracy of the device is largely dependent on the resolution of the analog-to-digital converter (ADC); the 10-bit resolution offered by the internal ADC of the microprocessor is insufficient. An external dedicated voltage reference with a band-pass-filtered input signal may significantly reduce the error. Overall system performance was as expected, with all major subsystems functioning adequately. The use of firmware interrupts to handle major tasks has resulted in a device that is able to continuously record sensor data. The outlined development of a continuous blood-glucose monitor allows glucose values to be represented numerically and graphically on a display. The data is locally stored and can be uploaded to a personal computer for analysis. Security on a healthcare IT network has traditionally been the domain of the IT department. This talk explores the role Biomedical/Clinical Engineers have in ensuring that their patient networks are secure, and what we need to be aware of to effectively work with and advise the IT department in maintaining the integrity of a secure healthcare IT environment. Discussion will cover aspects of VLANs, wireless networks, protocols, standards, virus updates, remote diagnostics and more. Introduction: Radiation Induced Bystander Effect (RIBE) is defined as "the induction of biological effects in cells which are not directly traversed by ionizing radiation but are in close proximity to cells that are".
This paradigm shift in target theory is significant from a radioprotection point of view and may require the reassessment of the radiation damage models currently used in radiotherapy for Tumour Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) evaluations. The purpose of this study is to investigate the possible effect of RIBE on cell survival in a dose "cold spot" and the associated loss in TCP. Initially, the human prostate cancer cell line PC3 was investigated for its population doubling time and colony Plating Efficiency (PE), parameters required to assess the proliferation ability of these clonogens. The doubling time was found to be 48 hours and the PE around 60 to 70%. Cells were then examined for radiation cytotoxicity and for identification of D_50 (the 50% kill dose) by irradiating them to 2, 4, 6 and 8 Gy using a 6 MV X-ray beam from a Varian 6/100 linear accelerator. The cells were positioned at 100 cm from the beam focal spot and a radiation field of 20x20 cm² was applied. Dose delivery parameters were calculated from same-set-up measurements using LiF thermoluminescent chips. The chips were investigated for their linearity, reproducibility and individual sensitivity. It was found that 90 MU needed to be delivered on the linear accelerator for each 1 Gy of dose. Having obtained these essential cell line properties, radiation toxicity in these cells due to direct and indirect radiation damage can now be investigated. In the second part of the experimental work, a "cold spot" will be created in the middle of a flask using shielding material and cell survival then assessed using the clonogenic assay technique. This part of the experimental work will allow investigation of the potential effect of RIBE on cell survival in the reduced-dose region within the planning treatment volume. Introduction: VMAT * is a novel extension of IMRT in which an optimized 3D dose distribution may be delivered in a single gantry rotation. This optimization algorithm is the predecessor to Varian's RapidArc. VMAT is highly efficient and has the potential to reduce monitor units and treatment times. This radiotherapy planning study compared 9-field cIMRT with VMAT for locoregional radiotherapy of left-sided breast cancer, where the primary treatment goal was to reduce the volume of heart receiving 30Gy. Methods: 5 patients previously treated with 9-field cIMRT to the left breast/chest wall and regional nodes, including the internal mammary chain, were re-planned (50Gy/25 fractions) using the same contours and CT dataset with the VMAT technique. The endpoints between VMAT and cIMRT plans were dose comparison metrics (PTV homogeneity and conformity indices, heart V30, left lung V20, and mean doses to surrounding structures), the number of monitor units (MUs) and the treatment time. The results were analyzed using a two-sided paired t-test. Results: VMAT using two 190-degree arcs with 2 cm overlapping fields was used due to the large treatment volumes to be optimized and the limited travel range of the multi-leaf collimators. Treatment plans generated using VMAT optimization resulted in similar PTV homogeneity and conformity indices compared with cIMRT optimization (p>0.3) but superior organ-at-risk sparing. Averaged over the 5 patients, VMAT reduced heart V30 from 4.2% to 2.6% (p=0.01) and left lung V20 from 19.1% to 16.8% (p=0.02). VMAT achieved significant reductions in the mean dose to heart, lungs, contralateral breast and total healthy tissue (all p<0.005) compared with cIMRT.
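The statistical comparison described in the Methods above (a two-sided paired t-test across the 5 patients) can be reproduced in outline as below, assuming SciPy; the per-patient values are invented placeholders, not the study data.

```python
from scipy.stats import ttest_rel

# Hypothetical per-patient heart V30 (%) for cIMRT vs VMAT (n = 5);
# values are illustrative only, not the published results.
v30_cimrt = [4.0, 4.5, 3.8, 4.6, 4.1]
v30_vmat = [2.4, 2.9, 2.3, 2.8, 2.6]

t_stat, p_value = ttest_rel(v30_cimrt, v30_vmat)   # two-sided by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A paired test is the appropriate choice here because each patient serves as their own control: the same contours and CT dataset were re-planned with both techniques.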
The VMAT plans using 2 arcs reduced the number of MUs by 30% compared with cIMRT. The mean treatment time was 3.7 minutes for VMAT (< 2 minutes per arc) compared with 9.8 minutes for cIMRT delivery (p=0.0004). Conclusion: VMAT optimization employing 2 short arcs (190 degrees each) yielded similar PTV homogeneity and conformity compared to cIMRT optimization, with lower mean doses to healthy tissue and specific surrounding normal tissues, for treatment of the left breast/chest wall and regional nodes. VMAT reduced the MUs and significantly reduced treatment delivery time compared to cIMRT. * VMAT pre-existed and is NOT the same as the commercial product released by Elekta last year. Trent Aland 1 , Jessica Hughes 1 and Greg Pedrazzini 1 Introduction: Traditionally in our clinic, the HDR brachytherapy treatment of cervical cancer has been carried out using a standard metal Fletcher applicator set. Recently, a CT/MR-compatible brachytherapy applicator set was acquired in order to allow CT planning of patients. However, this introduces a complication, as the CT/MR-compatible applicator set lacks the rectal and bladder shielding found within the ovoids of the standard metal Fletcher set. Therefore, the aim of this study was to investigate the differences in the relative dose distributions obtained when using the two applicator sets, with particular emphasis on the bladder and rectal doses. Methods: A jig was designed and manufactured in-house to provide reproducible positioning of the applicator sets, along with strips of Gafchromic film, in a mini water tank. Initially, the saturation point of the film was determined, along with its dose dependency. The film strips were aligned in order to represent the rectal and bladder planes (Fig. 1). Multiple strips were exposed using both applicator sets. The exposed film was then scanned and analysed. Relative dose differences between the two applicator sets were determined by comparing profiles taken across the film. Results: Relative dose profiles from the CT/MR and standard Fletcher applicator sets were compared over a range of distances measured laterally from the centre of the ovoid. Preliminary findings have shown that, over a distance of 5 cm radially from the ovoid, the lack of shielding in the CT/MR applicator set results in an increase in dose of up to 6%, with the maximum found in a plane 5 mm from the ovoid. Discussion: Relative dose differences were most significant within the first 15 mm measured radially from the ovoid, due to the presence of the shielding. Therefore, the effect of the lack of ovoid shielding must be considered not only when looking at rectal and bladder doses, but also at the cervix dose distribution in general. The effect of the lack of rectal and bladder shielding within the ovoids of the CT/MR applicator was investigated and has been shown to give up to a 6% difference in relative dose distribution as compared with a standard Fletcher applicator set. The Lyman and relative seriality models were used to predict the NTCPs. The survival fraction was calculated based on DVHs, and the TCP was estimated using the Poisson hypothesis. The two models predicted no heart complications. The NTCP values for lung were also very small (0.1%) for different source position deviations (1-4 mm) from the centre of the balloon. The NTCP was estimated to be 0.1%, 0.1%, 3.5% and 1.2% for the skin complications of desquamation, erythema, fibrosis and telangiectasia respectively, with the source at the balloon centre.
A 4 mm source shift caused nearly a 5% increase in the NTCP for developing tissue fibrosis. The source deviation had a significant effect on the TCP calculations. A deviation of the source by 2 mm caused an approximate 5% reduction in TCP, and a 4 mm source shift reduced TCP by 16%. Accurate positioning of the 192Ir source at the balloon centre is critical for MammoSite brachytherapy. The source deviation has a small impact on NTCP for skin. However, the impact on TCP can be significant, especially since clinical protocols accept up to 4 mm of balloon deformation. MammoSite® Radiation Therapy System (Cytyc, Marlborough, MA). A. Haughey, C. Rahill and G. Coalter. Results: Each MOSFET on the linear array has its own set of calibration values, which can be significantly different from each other. The angular dependence was also measured, resulting in deviations of over 20% in some orientations. Data collected from patient treatments range from 4% to 50% difference from the TPS. Discussion: The main advantage of MOSFET dosimeters is the instant readout, which can be easily converted to dose. However, we have encountered several problems with their use to date in HDR brachytherapy. Whilst every effort is made to ensure the MOSFETs do not move after insertion, we cannot be certain they are in the same position during treatment and CT scanning. Also, on some occasions the MOSFETs have fallen out during patient set-up, have not been initialised properly, or the bias box has not been connected during irradiation. It is as yet unclear how much the angular dependence is affecting our results. Efforts are ongoing to devise suitable ways of ensuring MOSFETs do not change position between CT and treatment. Palmerston North Hospital, Palmerston North, New Zealand. Introduction: Radiotherapy treatment of superficial and shallow tumours needs additional care, and extended data are required to plan and treat lesions near sensitive organs. The dose distributions of the different modalities, particularly for small field sizes, are often a concern of clinicians. The isodose distributions and percentage depth doses (PDD) of 50, 100 and 150 kV x-rays (Gulmay D3105 SXR) and 6 MeV electron beams (Siemens Primus linear accelerator) for 1, 2, 3 and 4 cm circular field sizes (FS) have been measured. The beams were measured in a radiation field analyser (RFA) tank using photon and electron diodes. The PDDs of the SXR beams were also measured in a Plastic Water phantom using an Advanced Markus (PTW) chamber and compared with the diode results. Results and discussions: The measured isodose curves using diodes for the various beams are depicted in Figure 1. They indicate that the penumbral dose varies with the modality, i.e. the penumbra is up to 0.9 cm for 6 MeV electron beams but only 0.2 cm for 100 kV X-rays. As shown in the Figure, the 90% depth dose coverage for the 6 MeV beam is found to be 0.7-1.8 cm versus 0-0.4 cm for the 100 kV beam. Similarly, the 90% width coverage for the 4 cm FS is up to 2.6 and 3.7 cm respectively for the 6 MeV and 100 kV beams. The difference in the pattern of the isodose curves between the two modalities is readily apparent. If charts in this format were available, they would help clinicians to speed up their decision making with improved accuracy. Conclusion: Charts of isodose distribution, percentage depth dose and profiles around the decision making area, particularly for small fields, are necessary to increase the accuracy of radiotherapy treatment by choosing the correct beam and geometry.
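The penumbra comparisons above use the conventional 80%-20% definition; a minimal sketch of how such a width can be extracted from a sampled profile (the profile values below are illustrative):

```python
import numpy as np

def penumbra_80_20(x_cm, dose_pct):
    """Width between the 80% and 20% dose levels on one side of a
    normalised beam profile, by linear interpolation."""
    d = np.asarray(dose_pct, dtype=float)
    x = np.asarray(x_cm, dtype=float)
    # np.interp needs monotonically increasing 'xp'; on the falling
    # penumbra edge, dose decreases with x, so reverse both arrays.
    x80 = np.interp(80.0, d[::-1], x[::-1])
    x20 = np.interp(20.0, d[::-1], x[::-1])
    return abs(x20 - x80)

# Illustrative falling-edge samples (distance in cm, dose in %).
x = [1.6, 1.8, 2.0, 2.2, 2.4]
d = [95.0, 80.0, 50.0, 20.0, 5.0]
print(penumbra_80_20(x, d))  # -> 0.4 cm
```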
Introduction: The ARPANSA superficial X-ray (20 to 100 kV) calibration facility has been moved from a source detector distance (SDD) of 80 cm to 30 cm. The resulting beam diameter has changed from 14 cm to 5 cm. The new distance has the dual advantage of being more clinically relevant and also increasing the signal to noise ratio, which is particularly important for the calibration of small 0.02 cm³ plane-parallel chambers. The medium energy X-ray calibration facility is unaffected by this change. Monte Carlo (MC) modelling of the low energy free air chamber (LEFAC) showed an unexpectedly strong scattering from the lead aperture for energies above approximately 80 kV. The process of updating the relevant calibration parameters to the new SDD has proceeded with both experimental measurements and MC modelling of the LEFAC. The MC modelling uses EGSnrc, includes a fluorescence correction, and uses an improved air mixture. The LEFAC has been operating as a secondary standard since 2003. Prior to this it was the Australian primary standard for superficial X-rays. We have taken the opportunity to completely realign the LEFAC and to measure and calculate the relevant parameters at the old and new distances. To avoid unnecessary delays in calibrations during the updating process, a large 0.2 cm³ plane-parallel chamber (PTW 23344 sn 0858), calibrated by the BIPM (the International Bureau of Weights and Measures), is being used as a temporary secondary standard. This chamber has also been calibrated on the ARPANSA primary standard of medium energy X-rays. The MC modelling showed that correction factors such as photon scatter, electron loss, and fluorescence are not affected by the increased beam divergence at an SDD of 30 cm. An unexpectedly strong scattering from the lead aperture into the measuring volume was observed for X-rays above approximately 80 kV. This was confirmed by measurement. Both the model and the measurements showed that the scattering drops off rapidly with distance from the aperture. For our geometry the scattering was negligible at distances from the aperture greater than 15 cm for 100 kV. The ARPANSA calibration of the large 0.2 cm³ plane-parallel chamber (PTW 23344 sn 0858) with the LEFAC is compared to the calibration of this chamber performed by the BIPM. The first results from the realigned LEFAC with the new correction factors show agreement well within 1% with the results from the BIPM, as shown in Figure 1. The short distances involved in superficial X-ray measurements, combined with X-ray energies greater than 80 kV, provide the potential for scattering errors when using lead apertures. The new small beam size of 5 cm may mean that treatment centres will need to use lead cut-outs to match the calibration beam size. The first results from the realigned LEFAC are promising. International comparisons will be undertaken and, if these show sufficient agreement, the LEFAC will be reinstated as the primary standard for superficial X-rays in Australia. Calibrations of superficial X-rays are currently performed at ARPANSA at an SDD of 30 cm and with a beam diameter of 5 cm. Feedback is welcomed, particularly information about measurement conditions and protocols used by treatment centres. The detectors used in this study were radiographic film, an electronic portal imaging device (EPID) and a 2D chamber array. The 2D array used in this study was a Scanditronix/Wellhöfer I'mRT MatriXX© unit which has been commissioned for IMRT treatment verification.
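For the ARPANSA distance change described above, the gain in signal follows directly from the inverse square law:

$$\left(\frac{80\ \mathrm{cm}}{30\ \mathrm{cm}}\right)^{2} \approx 7.1$$

That is, roughly a sevenfold increase in fluence rate at the chamber for the same tube output, consistent with the stated signal-to-noise motivation (a back-of-envelope estimate; the abstract does not quote a figure).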
Dose calculations from a treatment planning system (CMS XiO© v4.3.1) were also used for comparison. An assessment of modulation must include both the change in dose and the size of the field. An equation (Equation 1) has been proposed to assess field complexity, in which FC is the field complexity, D1 is the dose at a point in a given field and D2 is the dose at another point a distance Δd away from the point D1. If there is an increase in dose over a given distance, i.e. when D2 is larger than D1, field complexity increases. To achieve this quantification of modulation or complexity, a 'picket fence' pattern, as shown in Figure 1, was chosen. This type of pattern allows fields of increasing modulation to be produced by decreasing the 'picket' width and separation. To investigate the performance of each detector, fields of varying modulation were measured. From analysis of this data and comparison with the TPS calculations, the performance of each detector was determined for fields of different complexity. Typical patient treatment plans were assessed, including regions of high dose gradients. Results: All measurements were undertaken with a Siemens Oncor Impression Plus linear accelerator at 6 MV photon beam energy. An isocentric setup was used for measurements to best simulate the usual setup for IMRT treatments. Picket fence fields were produced with 'picket' and separation widths ranging from 0.2 mm to 5 cm, producing different field complexity values. For each detector, the complexity values at which measurements accurately corresponded to the treatment planning system were determined. In regions where low accuracy was seen for the typical patient fields, the complexity value was calculated to determine if it was outside the expected range for that detector. The results suggest that the MatriXX 2D array is suitable for QA of most treatment fields but loses accuracy greatly in regions of high dose gradients. Radiographic film produces the results most comparable with the doses calculated in the TPS, due to its excellent spatial resolution; however, it is not as practical as the MatriXX array. The EPID has good resolution and is practical, but needs further characterisation and calibration before it could be considered for use as a QA tool in IMRT. The proposed method allows a field to be assessed as high or low 'field complexity' prior to QA measurements. Fields where film measurements were necessary as well as MatriXX measurements could be identified prior to measurements being undertaken. Thus the use of film for all fields could be avoided and, when necessary, film could be exposed at the same time as the MatriXX 2D array. The novel assessment method for modulation presented here considers the dose complexity of treatment fields and ensures a more efficient QA process. Further studies will assess clinical IMRT fields to determine the number of situations where two or more detectors are required. Our QA is thus embodied in a set of numbers representing values of gamma throughout our dose distribution. What constitutes a pass? Methods: A commonly used passing criterion is that the number of points where gamma is less than 1 shall be greater than some fraction of the total, e.g. 90%. We argue that this can be misleading, resulting in 'false passes' and 'false failures'. We propose a criterion requiring that, for a treatment to pass, the gamma distribution be contained within an envelope described by a single-tailed Gaussian with standard deviation 1.
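Equation 1 can be evaluated numerically on a sampled profile; a minimal sketch, assuming FC is taken as the largest positive dose change per unit distance (the abstract defines the quantities but not the exact point-pair selection):

```python
import numpy as np

def field_complexity(dose, spacing_mm):
    """Largest positive dose change per unit distance along a sampled
    profile, following the (D2 - D1) over delta-d idea of Equation 1.
    A sketch of one plausible implementation; the abstract does not
    specify how the point pairs are chosen."""
    d = np.asarray(dose, dtype=float)
    gradients = np.diff(d) / spacing_mm   # (D2 - D1) / delta-d
    return gradients.max()

# A crude 'picket fence' profile: high/low dose bands, 1 mm sampling.
profile = [10, 10, 100, 100, 10, 10, 100, 100, 10]
print(field_complexity(profile, spacing_mm=1.0))  # -> 90.0 per mm
```

Similarly, one plausible reading of the gamma-envelope criterion just proposed is that, at every threshold, the observed fraction of points exceeding that gamma value must not exceed the corresponding tail of a half-normal distribution with standard deviation 1; a sketch under that assumption (the abstract does not give the exact formulation):

```python
import numpy as np
from scipy import stats

def gamma_envelope_pass(gamma_values, thresholds=(0.5, 1.0, 1.5, 2.0)):
    """At each threshold t, the observed fraction of points with
    gamma > t must not exceed the tail of a half-normal distribution
    with sigma = 1 (i.e. a single-tailed Gaussian envelope)."""
    g = np.asarray(gamma_values, dtype=float)
    for t in thresholds:
        observed_tail = np.mean(g > t)
        envelope_tail = 2.0 * stats.norm.sf(t)  # half-normal tail, sigma=1
        if observed_tail > envelope_tail:
            return False
    return True

# Example: gamma values clustered well below 1 pass the envelope test.
rng = np.random.default_rng(1)
print(gamma_envelope_pass(np.abs(rng.normal(0.0, 0.5, 10000))))  # True
```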
The sensitivity of the test is adjusted by appropriate selection of the underlying Van Dyk criteria. Results: We will present results, obtained in our local practice, using Sun Nuclear Corporation's 'MapCheck', a popular device for measuring two-dimensional dose distributions. We will also demonstrate dose distributions obtained for the same plans by recalculation of the volume dose on the plan CT data set, using fluence derived from portal image measurements of the delivered dose distribution. This latter work uses the less well-known Math Resolutions 'Dosimetry Check', a software package developed by Renner 3. The results obtained with Dosimetry Check add to our confidence in our analysis of the planar doses. They also demonstrate an application of volumetric gamma, and are further justification for considering the distribution of gamma. Where gamma analysis is used as a tool for assessing the quality of an IMRT treatment, the criterion for a pass should be expressed in terms of the distribution of gamma, rather than in terms of the fraction of passing points. Introduction: The Siemens ARTISTE linear accelerator is a new product which has a 160 leaf multileaf collimator (MLC). One of the main differences between the Siemens 160 MLC and earlier versions, such as the 58 and 82 MLC, is that the leaves move horizontally rather than divergently. This change in leaf movement has significantly increased leaf speed. Furthermore, the 160 MLC provides improved conformal shaping for small irregular fields, and reduced IMRT treatment delivery time. However, because the 160 MLC is a new design there may be some questions and clinical concerns regarding its ability to improve treatment delivery reliably. Therefore, before the 160 MLC can be used clinically, its characteristics must be verified and evaluated relative to a previous design. This process is known as commissioning and is a requirement for clinical use. The purpose of this project is to commission the 160 MLC, and thus report on its characteristics, which include leakage, penumbra, leaf positional and movement accuracy, leaf speed, and collimator and gantry angle dependence. Methods: Leakage and penumbra measurements were performed using ion chambers and a water tank. Interleaf leakage was measured by scanning along the central axis, perpendicular to the direction of leaf motion. The Y jaw was fully open, while the X1 and X2 jaws were positioned at 19 cm and 20 cm respectively. The maximum leaf end leakage was measured by scanning along the central axis parallel to the direction of leaf motion, for a 0 x 40 cm² field size, with the leaves closed in the centre. Leaf end leakage with the Y jaws closed was also measured using the same method. These measurements were performed at Dmax and 100 cm SSD for a 10 MV x-ray beam. The longitudinal penumbra of a 6 MV x-ray beam at 90 cm SSD and 10 cm depth was measured for field sizes of 5 x 5, 10 x 10, 20 x 20, 30 x 30 and 40 x 40 cm². The profiles were obtained by scanning along the central axis parallel to the direction of leaf motion. These measurements were verified using film. Leaf positional accuracy was measured by verifying the position of the MLC leaves using the light field and the radiation field. Graph paper was used to verify the light field, and film was used to verify the radiation field. Leaf speed was evaluated by measuring the time taken for the leaves to travel from a 1 x 40 cm² field to a 40 x 40 cm² field, and from a 40 x 40 cm² field to a 1 x 40 cm² field.
Collimator and gantry angle dependence was checked by measuring the position of different field shapes at different collimator and gantry angles. Results: The maximum leakage was 0.67% while the average leakage was 0.37%. The maximum leaf end leakage, measured with the Y jaw open along the central axis, was 12.17%, and the maximum leaf end leakage with the Y jaw closed was 0.69%. The average longitudinal penumbra for field sizes up to 10 x 10 cm² was 0.56 ± 0.05 cm, while for field sizes greater than 10 x 10 cm² the average longitudinal penumbra was 1.04 ± 0.13 cm. The speed of the MLC leaves was 3.47 ± 0.27 cm/s. MLC leaf positions were accurate to within acceptable tolerances. Collimator and gantry angle dependence checks showed that when the collimator and gantry were rotated to different angles, there was no evidence of sag in the MLC due to gravity. The measured leakage for the 160 MLC is considerably lower than the leakage measured for the Siemens 58 MLC; this is because the 160 MLC leaves do not use the conventional tongue and groove design, but a new tilted design known as the triangular tongue and groove design. By tilting the leaves 0.37° it appears that interleaf leakage is reduced. The leaves of the 160 MLC have a minimum closing gap of approximately 0.4 mm to avoid opposing leaves colliding; this contributes to the overall leaf end leakage. However, the maximum leaf end leakage along the central axis is due to the rounded s-shaped design of the leaf ends. Penumbra width is defined as the distance between the 80% and 20% isodose lines. The average longitudinal penumbra is influenced by several factors, including the distance of the MLC leaf bank from the patient, the design of the leaf ends, and the field size. As a result, the penumbra is greatest at the large field sizes. The fact that the MLC is positioned relatively close to the patient compensates for the single focused, rounded leaf end design of the leaves. The 160 MLC can only produce a smaller penumbra than the double focused 58 MLC when the field size is 10 x 10 cm² or less; for large field sizes the 160 MLC produces a larger penumbra than the 58 MLC. Since IMRT fields are typically small, this suggests that the 160 MLC may be well suited to IMRT. The results presented here suggest that implementation of the 160 MLC for clinical use would allow improved dose delivery and enhanced quality of treatment, as a result of fast leaf movement, accurate field shaping, and low leakage. The small penumbra width for field sizes of 10 x 10 cm² or less will benefit most 3DCRT/IMRT fields, and the increased penumbra width for fields equal to or larger than 20 x 20 cm² will be less significant for most treatment plans. Introduction: There is a trend towards including treatment couches and immobilisation devices in the dose calculation by the Treatment Planning System (TPS) rather than applying a bulk correction factor to account for the dosimetric effects of these objects. Large air gaps (up to 15 cm) that can occur between the patient and the treatment couch and/or immobilisation device can be particularly difficult for the TPS to model accurately. For example, a posterior field to a patient's axilla treated on a tilted breast board can create a large air gap between the treatment couch and the skin surface. Previous studies [1-4] have investigated the ability of various TPSs to calculate the dose beyond small air gaps, but these were for air gaps within the body or for shallow depths beyond the air gap.
The aim of this study was to investigate the accuracy of the Eclipse™ analytical anisotropic algorithm (AAA) for the calculation of dose behind large air gaps, as compared to Monte Carlo calculations and results obtained from measurements. This work is a continuation of previous work with the Eclipse™ pencil beam convolution (PBC) algorithm, which used the equivalent tissue air ratio method of inhomogeneity correction. Methods: Central axis depth dose data in water for a 6 MV photon beam (Varian Clinac 600), 10 cm x 10 cm field size and 100 cm source to water surface distance were measured behind a range of air gaps (1 to 15 cm) using a parallel plate ionisation chamber. The air gaps were created by supporting water equivalent RW3 slabs (0.2 to 4 cm thick) above the water surface. The dose for each setup was normalised to that delivered by 100 monitor units at 5 cm depth. For each setup, the dose was calculated using the Eclipse™ TPS and DOSXYZnrc software (with a phase space produced by the BEAMnrc Monte Carlo system) and then compared to the measurements to a depth of at least 15 cm beyond the air gap. The results from measurement indicate that a secondary build-up region is present beyond the air gap. As the air gap thickness increased, the dose was found to reduce at the water surface. For larger air gaps, the dose behind the air gap is also reduced at depth. The Monte Carlo calculations confirmed these results. Eclipse™ (AAA) was found to overestimate the dose beyond the air gap. For a setup using a 2 cm RW3 slab before the air gap: for a 5 cm air gap, Eclipse™ overestimated the dose by up to 3% in the first 1 cm depth in the water phantom beyond the air gap and by up to 2.5% beyond 1 cm depth; for a 10 cm air gap, Eclipse™ overestimated the dose by up to 4.5% in the first 4 cm beyond the air gap and by up to 3% beyond 4 cm depth; and for a 15 cm air gap, Eclipse™ overestimated the dose by up to 5% in the first 5 cm beyond the air gap and by up to 3.5% beyond 5 cm depth. This trend was evident for all setups investigated in this study. Discussion: The Eclipse™ AAA dose calculation overestimated the dose beyond the air gap but was found to predict the presence of a secondary build-up region, which increased with increasing air gap thickness. This is a considerable improvement on the calculations resulting from previous work investigating the Eclipse™ PBC algorithm with the equivalent tissue air ratio method of inhomogeneity correction, which did not predict a secondary build-up region beyond an air gap of any thickness. AAA did not predict the reduction in dose beyond the secondary build-up region created beyond large air gaps; here the results were similar to those from the PBC algorithm. The errors in the dose calculation by AAA can be attributed to the approximations within the algorithm, particularly with respect to scattered radiation. The Bland-Altman method compares the dose measured at different depths without specifying the actual depths used. A linear regression was performed through the data points of the dose difference and mean dose. Results and Discussion: Bland-Altman plots were generated by comparing depth doses from the Farmer chamber with those of the parallel plate chambers. The minimum depth for comparison was 5 mm due to the diameter of the Farmer chamber. For the NACP and Roos chambers, the mean dose difference from the Farmer chamber was 0.1%, with a standard deviation of 0.1%. For the Markus chamber, the mean dose difference from the Farmer was 1%, with a standard deviation of 1%.
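The Bland-Altman statistics quoted above (mean difference and its standard deviation) are straightforward to compute; a minimal sketch, with illustrative chamber readings rather than the study's data:

```python
import numpy as np

def bland_altman(ref_dose, test_dose):
    """Bland-Altman statistics for two dosimeters measuring the same
    depth-dose points: mean difference (bias) and 95% limits of
    agreement, with differences expressed in percent of the reference."""
    ref = np.asarray(ref_dose, dtype=float)
    test = np.asarray(test_dose, dtype=float)
    diff_pct = 100.0 * (test - ref) / ref
    bias = diff_pct.mean()
    sd = diff_pct.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative Farmer vs parallel-plate depth-dose readings.
farmer = [1.000, 0.870, 0.760, 0.650, 0.560]
roos   = [1.001, 0.871, 0.759, 0.651, 0.561]
bias, loa = bland_altman(farmer, roos)
print(f"bias = {bias:.2f}%, limits of agreement = {loa}")
```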
The results also indicate a trend for the Markus chamber to over-respond with increasing depth, as compared to the Farmer chamber. All three parallel plate chambers were found to be suitable for the measurement of percentage depth doses of a 75 kVp x-ray beam, with the result for the Roos chamber not having been previously reported in the literature. In this work, we have demonstrated that the Bland-Altman statistical test can be easily applied to radiation dosimetry. It has been shown to be a powerful method for making useful comparisons between ionisation chambers and an accepted 'gold standard'. From these results, the Markus, NACP and Roos parallel plate ionisation chambers could be used interchangeably with the Farmer chamber for this measurement of the depth dose of a 75 kVp x-ray beam and selected as a new 'gold standard'. Gamma analysis with a dose difference of 3% and a distance to agreement (DTA) of 2 mm was used to assess accuracy. One of those tests was the chair pattern described by Van Esch et al (Van Esch et al 2002). Results: Figures 1A and 1B below show the comparison of a predicted and measured fluence map of the dynamic chair pattern. Figure 1C shows a predicted fluence profile and a measured fluence profile across the middle of the homogeneous region, as indicated by the two arrows. The gamma analysis shows that for the prediction and delivery of this chair pattern the gamma score was 0.996, with maximum and minimum gamma values of 1.1365 and 0.0261 respectively. We also obtained profiles across other regions of this pattern, including the dynamic leaf gap region and the interleaf leakage region. Introduction: Most commercially available in vivo dosimeters are dose rate and/or field size dependent. In vivo measurements for complex dynamic treatments such as IMRT are therefore difficult, because dose rate and field size change during the delivery of a single field and it is not practical to apply correction factors. Previous work 1 concluded that errors introduced by the field size dependence are reduced by using a 6 cm diameter water-equivalent scatter disk around the detector instead of the traditional metal build-up cap. OneDose™ MOSFET detectors are packaged with minimal overlying material and should be used with additional build-up material. In contrast, the OneDose™ Plus MOSFET detectors have a thin metal disk glued to the top and the manufacturer states that these can be used without additional build-up material. This study aims to determine the response of OneDose™ and OneDose™ Plus dosimeters in static fields and complex dynamic fields. Methods: OneDose™ dosimeters (under a 6 cm diameter wax scatter disc) and OneDose™ Plus dosimeters (with no added scatter material) were placed on the surface of a solid phantom and exposed to static square fields and dynamic fields (sliding window, pyramid and inverse pyramid fields) using 6 MV photon beams from a Varian 21EX. Their responses were compared to the response of a diamond detector at dmax for the static fields and a 0.125 cc ion chamber at dmax for the dynamic fields. Note that the reproducibility and linearity of the OneDose™ dosimeters had already been established in a prior study 2. Static fields: The OneDose™ dosimeters agreed with the reference dosimeter (diamond detector) to within 3.2% over the full range of field sizes. The OneDose™ Plus dosimeters overestimated the dose in the 1 cm field by 8.2%, predominantly because the 5 mm diameter metal build-up cap is still fully irradiated at this field size.
This is in contrast to a large water-equivalent scatter disc, which is only partially irradiated when the field size is less than the disc diameter and therefore contributes less scatter to the detector as the field size decreases. The OneDose™ dosimeters agreed with the reference dosimeters to within 5% in all cases and may be useful in vivo dosimeters for IMRT; however, actual treatment fields have additional challenges, such as oblique incidence, that have not been investigated in this study. The OneDose™ Plus dosimeters showed a significant deviation from the reference dosimeters when they were placed close to neighbouring regions of higher or lower dose levels, due to their small metal build-up caps which cannot approximate the scatter conditions of a larger water equivalent material surrounding the detector. OneDose™ Plus dosimeters should therefore be used with caution in IMRT treatments. P. Fogg¹, T. Kron², J. Cramb² and D. Taylor¹. 1 Peter MacCallum Cancer Centre, Moorabbin, Australia; 2 Peter MacCallum Cancer Centre, East Melbourne, Australia. Introduction: Radiotherapy treatment plans involving non-customised build-up sheets often involve air cavities between the bolus and the skin. Such sites include the chest wall, breast, TBI, and other treatment sites using immobilisation masks. The surface dose has been investigated for a number of air cavity sizes, depths and beam geometries in a geometric phantom. Methods: A 6 MV Varian EX linac was used to deliver a range of non-opposed clinical field sizes at gantry angles of 0, 30, 45 and 60 degrees. Isocentric measurements were made with an Exradin A10 parallel plate chamber (PPC), with the chamber positioned at the distal surface of the air cavity. These 'channels' were 30 cm in length and had widths of 4, 7 and 30 cm. The air gap thickness varied from 0 to 5 cm above the chamber face. Measurements were also made with the gantry rotation along and across the 4 and 7 x 30 cm air cavities. The phantom comprised 30 x 30 cm PTW solid water sheets with a total thickness of 8 cm. The chamber remained at the surface of the phantom, with a 0.5 cm build-up solid water sheet forming the 'bolus' for the varying air gap thicknesses. Eclipse (v6.5) dose calculations were performed with a 10 x 10 cm² field at the same gantry angles for comparison with the phantom measurements. The PBC (v7.5.18) algorithm was used with the Modified Batho (MB) inhomogeneity correction. As the air gap increased, the dose measured at the phantom surface decreased. For this bolus thickness of 5 mm, central axis surface dose increased as beam obliquity increased. Surface dose was also restored as the width of the air channel decreased. For the infinitely wide air gap, the rate of surface dose increase per angle of obliquity proved to be independent of the air gap thickness (see Figure 1). This held for both smaller and larger field sizes. Discussion and Conclusion: Surface doses beneath the 0.5 cm bolus sheet decreased with increased air gap due to loss of electronic equilibrium (the re-build-up effect). Our distal surface dose reduction measurements matched those of similar geometries in previous works (1, 2). For air gaps encountered in clinical chest wall treatments, reductions in surface dose will be less significant; however, a comparison of surface doses calculated by the treatment planning system for the above geometries will be presented.
We expect that surface doses under clinical air gaps can be estimated for planning purposes from the small cavity data. Results: It was found that the average radius of the collimator central axis rotation walkout was 0.5 mm ± 0.2 mm. The beam profile measurements gave an average flatness of 4.1% ± 1.7% and an average symmetry of 0.7% ± 0.3%. The error in the multileaf collimator leaf position was found to increase from an average absolute value of 0.4 mm ± 0.1 mm on the X1 side of the collimator jaws to 0.9 mm ± 0.3 mm on the X2 side. Discussion: The EPID QA test results were comparable in accuracy and stability with the current methods used. The EPID offers improved efficiency and gives a more comprehensive analysis of the parameters being tested. The difference in beam flatness calculations between RIT and Coherence Physicist (Fig. 1) is due to the differing image registration techniques employed by each package. The standard treatment at our Centre is external beam radiotherapy of 78 Gy to the isodose surrounding the PTV. To accommodate such high doses, these patients all have a bowel/bladder preparation protocol and have implanted fiducial markers in the prostate for daily positioning. The use of daily imaging (orthogonal images or CBCT) provides more information on these patients than would previously have been collected. ConeBeam CT images can be used to determine the daily prostate position, daily prostate shape, and external contour variation. We are currently investigating how to use this information to continuously adapt the treatment delivery to deliver the intended volume dose on each occasion. In this presentation I will discuss the data collected and how it might be used to modify our current practice, as well as various models to utilise such information in adaptive radiotherapy. Methods: ConeBeam CT images were acquired weekly on intermediate-risk prostate carcinoma patients. This totalled 8 images per patient. The ConeBeam CT images are used to interpret information such as changes to body contour, patient setup changes and FSD changes. This data can be used to assess the potential differences between the planned dose, average dose and delivered dose of the day. Suitable correlations between the measured data and calculated dose differences will lead to decision models on when and how to adapt the treatment. Daily prostate movement implies a daily isocentre shift which manifests as a dose difference from that intended. Our analysis of dose differences through the patients' treatment courses has indicated local dose variations from that intended of over 5%. Currently we are compiling further patient data to correlate such dose variations with measurable quantities on CBCT and to analyse time trends. Time trend analysis, for example, shows that certain gantry angles (i.e. posterior obliques) may be more susceptible to FSD variation than others. Conclusions: ConeBeam CT images offer various new sources of information that can be used to adapt daily treatments to account for geometrical differences in the patient during a course of radiotherapy. Collection and correlation of this data may be used to suggest when and how to intervene and appropriately adapt the treatment to conform with the intended prescribed dose. Introduction: The motivation for developing megavoltage (and kilovoltage) cone beam CT (MV CBCT) capabilities in the radiotherapy treatment room was primarily based on the need to improve patient set-up accuracy.
There has recently been interest in using the cone beam CT data for treatment planning. Accurate treatment planning, however, requires knowledge of the electron density of the tissues receiving radiation in order to calculate dose distributions. This is obtained from CT, utilising a conversion between CT number and the electron density of various tissues. The use of MV CBCT has particular advantages compared to treatment planning with kilovoltage CT in the presence of high atomic number materials, and requires the conversion of pixel values from the image sets to electron density. Therefore, a study was undertaken to characterise the pixel value to electron density relationship for the Siemens MV CBCT system, MVision, and to determine the effect, if any, of varying the number of monitor units used for acquisition. If a significant difference with the number of monitor units were seen, then pixel value to ED conversions might be required for each of the clinical settings. The calibration of the MV CBCT images for electron density offers the possibility of a daily recalculation of the dose distribution and the introduction of new adaptive radiotherapy treatment strategies. Methods: A Gammex Electron Density CT Phantom was imaged with the MV CBCT system. The pixel value for each of the sixteen inserts, which ranged from 0.292 to 1.707 in electron density relative to the background solid water, was determined by taking the mean value from within a region of interest centred on the insert, over 5 slices within the centre of the phantom. These results were averaged and plotted against the relative electron densities of each insert, and a linear least squares fit was performed. This procedure was performed for images acquired with 5, 8, 15 and 60 monitor units. The linear relationship between MV CBCT pixel value and ED was demonstrated for all monitor unit settings and over a range of electron densities. The number of monitor units utilised was found to have no significant impact on this relationship. Introduction: Radiation dose is an independent determinant of biochemical outcome. Several recent studies on prostate cancer patients show that in many hospitals patients are treated with higher than conventional radiation doses. Results of the randomized trials confirm that dose escalation improves the biochemical control rates. The gradual escalation of prescription dose affects the dose distributions. Radiobiological models are being used to evaluate and predict the outcome of a treatment plan in terms of both tumour control probability (TCP) and normal tissue complication probability (NTCP). In this study the linear quadratic model has been used to investigate the radiobiological effects of dose escalation in three dimensional conformal radiotherapy (3D-CRT) and intensity modulated radiotherapy (IMRT) treatment plans for a prostate cancer patient. Methods: Two 3D-CRT plans (one with 4 beams, another with 5 beams) and an IMRT treatment plan were developed with an initial prescription dose of 60 Gy in 2 Gy/fraction to the prostate. The prescription dose was then gradually increased from 60 Gy to 70 Gy in 1 Gy steps to investigate the sensitivity of TCP and NTCP to dose escalation. Plans were developed for an increasing number of fractions (30 to 35 fractions) keeping the dose/fraction constant (2 Gy/fraction), and also for an increased dose/fraction keeping the number of fractions the same (30 fractions). TCP, NTCP and complication free tumour control probability (P+) were calculated using the Pinnacle planning system.
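TCP estimation of this kind conventionally combines linear quadratic cell survival with the Poisson hypothesis; a minimal sketch for a uniformly irradiated target, with illustrative parameter values (the study's clonogen number and alpha, beta values are not quoted in the abstract):

```python
import math

def tcp_poisson_lq(n_clonogens, alpha, beta, dose_per_fx, n_fx):
    """Poisson TCP with linear-quadratic cell kill:
    SF = exp(-n*(alpha*d + beta*d^2)), TCP = exp(-N0 * SF).
    Parameter values below are illustrative, not those of the study."""
    sf = math.exp(-n_fx * (alpha * dose_per_fx + beta * dose_per_fx**2))
    return math.exp(-n_clonogens * sf)

# 70 Gy in 2 Gy fractions, illustrative prostate parameters.
print(tcp_poisson_lq(n_clonogens=1e6, alpha=0.15, beta=0.05,
                     dose_per_fx=2.0, n_fx=35))
```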
Results: Table 1 shows a brief summary of the TCP, composite NTCP and P+ for the three treatment plans at different prescription doses planned in 2 Gy/fraction. Discussion: When the dose is increased, both the TCP and NTCP increase gradually, as expected. In all cases TCPs increase with dose, but as NTCPs also increase, the overall P+ is not always optimal at the highest prescription dose. The optimum value of P+ was obtained for the IMRT plan with a prescription dose of 66 Gy planned in 33 fractions with a fraction size of 2 Gy. For high prescription doses, a fraction size of 2 Gy gave P+ values higher than those from the fixed number of fractions. At higher prescription doses, the fraction size increased when the fraction number was fixed; the large fraction size causes the high NTCP. Accurately delivered IMRT reduces the complication rate among the organs at risk. The characteristics of a proton Bragg peak allow most of the dose to be deposited at depth. This enables highly conformal dose distributions to be delivered during proton therapy with lower integral doses than those possible with x-ray techniques. The range of a proton beam in tissue is finite, meaning tissues distal to the target receive close to zero dose. The choice of treatment modality is also influenced by the risk of secondary malignancy from radiotherapy. There is currently no definitive model to describe secondary cancer incidence as a function of dose, but even doses of less than 6 Gy are believed to be capable of inducing malignancy 1. Methods: Treatment plans for a prostate cancer patient were generated for intensity modulated radiation therapy (IMRT), passively scattered proton therapy (PT) and intensity modulated proton therapy (IMPT) using an Eclipse treatment planning system. All IMRT plans employed between 5 and 9 fields to ensure acceptable dose conformality in the target and used inverse planning for optimisation. PT plans employed two lateral beams of equal weight, assuming a double scattering technique of beam generation, and were forward planned. IMPT plans employed inverse planning. The treatment plans were evaluated through DVH analysis for the PTV and critical structures. Results: Figure 1 shows a treatment plan using PT (left) and IMRT (right) for the same patient. The minimum dose shown is 4 Gy. Both IMRT and PT give acceptable target coverage, with higher dose conformality achieved in the PTV by the PT plan. The PT plan irradiated approximately 14% of the normal tissue volume in the CT data set to 4 Gy, compared to 33% in the IMRT plan. The rectum received less irradiation in the PT plan, particularly at low doses. The difference between the rectal DVHs of the two plans diminishes as dose increases. The mean dose to the rectum from the PT plan (36.9 Gy) is lower than from IMRT (40.1 Gy). The bladder also receives less dose from the PT plan (9.9 Gy mean dose compared to 23.0 Gy for IMRT). IMPT plans gave better dose conformality than the other modalities, with better rectal sparing (30.9 Gy mean dose). Normal tissue exposure was also reduced by IMPT. Discussion: As dose conformality in the PTV for the IMRT plan approached that of the PT plan, the level of normal tissue sparing diminished. Improving tissue sparing in IMRT is only possible by reducing the target dose conformality. The PT plan delivers a more conformal target dose with less irradiation of critical structures and normal tissue. Adding intensity modulation increases conformality and further reduces normal tissue exposure.
Increasing the number of beams can increase conformality in both modalities, but normal tissue exposure would also increase. The lower integral doses predicted in the proton plans may result in lower levels of secondary cancer incidence. However, the proton therapy plans only account for scattered primary protons in the dose calculation and do not consider the dose from secondary particles such as neutrons. Failure to account for secondary particles can result in dose errors in the region of the proton Bragg peak 2. The current uncertainty in the RBE of neutrons means that, from these plans alone, a complete determination of the secondary cancer risk from proton treatments is not possible. Conclusions: Proton therapy treatment plans can generate highly conformal dose distributions in the target with better normal tissue sparing compared to IMRT plans. Critical structures are also better spared by the proton plans, particularly IMPT. Introduction: Intensity Modulated Radiotherapy has been introduced in many departments worldwide. This technique reduces the dose delivered to a given normal tissue while maintaining the dose to the target volume, through the use of modulated radiation beams. For head and neck patients this can result in reduced dose to the spinal cord and the parotids. With the introduction of IMRT, the dose to the target volume can subsequently be increased, achieving greater cell kill in the target volume for equivalent clinical effect on the normal tissues. Another potential benefit of utilizing IMRT for head and neck may be a decrease in the number of fractions (hypofractionation) while maintaining the clinical effect on the normal tissues when compared with conventional treatment plans. As head and neck tumours are highly proliferative, this hypofractionation or reduction in fraction number may improve the resulting cell kill, as well as reducing the treatment time for the patient. This modeling study was undertaken to determine the potential reduction in fraction number for a number of IMRT treatment plans. Methods: For a series of patient data sets both conventional and IMRT treatment plans were generated. The Biological Effective Dose, BED = nd[1 + d/(α/β)] (Equation 1), was utilised to consider two situations: i) maintaining the number of fractions and increasing the dose to the target, or ii) decreasing the number of fractions and maintaining the biological effect on the target. For both situations the acceptable effect on the spinal cord was taken as the BED for the conventional treatment plan. This BED value was maintained. The dose per fraction and the number of fractions were then adjusted for the IMRT plan to determine either the increase in BED to the target achieved when the dose/fraction was increased, or the reduction in the number of fractions possible if the BED to the target was maintained. An α/β value of 10 Gy was used for the target volume and a value of 2.5 Gy for the spinal cord. Discussion: As head and neck cancer is known to be highly proliferative, with extended treatment schedules reducing clinical effect, it may be of benefit to consider utilizing the improved dose distributions achievable with IMRT to hypofractionate. This reduction in the number of fractions and increase in dose per fraction to the target volume could be achieved without increasing the BED to the spinal cord beyond that resulting from conventional treatment plans. In the example presented here a reduction from 35 fractions to 16 could be achieved.
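The 16-fraction figure can be reproduced from Equation 1; a minimal sketch, assuming the spinal cord receives 1.8 Gy per fraction in the conventional plan and 0.8 times the target dose per fraction in the IMRT plan (the 1.6/2.0 Gy ratio of this example):

```python
import math

def bed(n, d, ab):
    """BED = n*d*(1 + d/(alpha/beta)) from Equation 1."""
    return n * d * (1.0 + d / ab)

# Conventional schedule: target 35 x 2 Gy; cord receives 1.8 Gy/fraction.
cord_bed_limit = bed(35, 1.8, 2.5)       # ~108.4 Gy (alpha/beta = 2.5)
target_bed_goal = bed(35, 2.0, 10.0)     # 84 Gy (alpha/beta = 10)

# IMRT plan: cord dose per fraction is 0.8x the target's (1.6/2.0).
# Find the smallest n that keeps target BED while respecting cord BED.
for n in range(35, 5, -1):
    # Target dose/fraction keeping target BED: solve d*(1 + d/10) = goal/n.
    c = target_bed_goal / n
    d_t = (-1.0 + math.sqrt(1.0 + 0.4 * c)) / 0.2
    if bed(n, 0.8 * d_t, 2.5) > cord_bed_limit:
        print(f"minimum schedule: {n + 1} fractions")   # -> 16 fractions
        break
```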
If a clinical trial were to be attempted, this could be approached in a systematic fashion, gradually reducing the number of fractions and increasing the dose per fraction to the target volume. This would minimize the likelihood of any drastic changes in clinical effect. Prior to commencing a clinical trial, other surrounding normal tissues such as the parotids would also be considered. These initial modeling investigations have intentionally not included time factors in the modeling, as this is the more cautious approach. Further investigations could be undertaken utilizing the change in treatment time in the modeling to either improve the cell kill in the target volume or further reduce the number of fractions required. This study has investigated the possibility of reducing the number of fractions with IMRT planning while maintaining the biological effective dose delivered to the spinal cord when compared with a conventional treatment plan. For an IMRT plan which achieves a reduction in dose/fraction to the spinal cord from 1.8 Gy to 1.6 Gy, while maintaining the target dose at 70 Gy delivered in 2 Gy fractions, the number of fractions could potentially be reduced to 16. Previous studies [2-4] have tested the accuracy and performance of the eMC algorithm; however, in most cases these tests were performed using the smallest calculation grid and highest accuracy settings, causing unacceptably long calculation times. It is therefore important to test this algorithm in a way that is clinically relevant, and to discuss the issues associated with its application in a radiation oncology department. Methods: We expand on previous investigations, testing the performance of the algorithm for clinical configurations and with variation of the calculation parameters. In this work, we investigate the automatic normalisation of fields calculated with eMC and the changes to field normalisation required for calculations with a 'dose-to-medium' algorithm. An extensive comparison of insert factors was performed at 100 and 110 cm FSD for all available electron energies and four applicator sizes. We used a library of standard rectangular inserts and a manual normalisation method. The performance of the algorithm for small field sizes was tested and compared to the previous Generalised Gaussian Pencil Beam (GGPB) algorithm. All testing was performed with a 2.5 mm grid size, a 1% statistical accuracy setting and medium level 3-D smoothing. These settings were chosen as an appropriate compromise between accuracy and calculation time. Results: The automatic normalisation function was found to cause small inaccuracies (up to 4%) in simple situations such as in a water phantom. However, for electron fields calculated on patient CT scans, it caused large inaccuracies of up to 20%. The difference between the calculated and measured insert factors was smaller for the higher electron beam energies. For these energies, the average difference between calculation and measurement was 2%, with a maximum difference of 3.8%. For the lower energy electron beams of 6 and 9 MeV, the average difference was also 2% but with maximum differences of 5.0% and 4.5% respectively. The comparison of calculated dose distributions to measurements in a water tank for a 4 x 4 cm² square insert and a triangular insert showed good agreement in the penumbra region to within 2%/2 mm. The agreement between eMC and measurement remains acceptable until the field size becomes less than 1.5 cm, for both the 9 and 20 MeV electron beams.
This is an improvement on the GGPB algorithm. Discussion: The MU accuracy results with eMC supported our current practice of manual renormalisation used with the GGPB algorithm. This involves renormalisation to the maximum dose on a profile through the centre of the insert in the beam direction. Due to statistical noise in MC type calculations, normalisation to the maximum is expected to result in an error of up to 1%. Implementation of this algorithm, which calculates dose-to-medium, was accompanied by education of the planning radiation therapists and the radiation oncologists as to the effect this would have on normalisation, dose distributions and DVH results. The previously used GGPB algorithm was still made available for situations where dose-to-water was required, such as for clinical trials, where previous experience has been with dose-to-water, and where the targeted cells are considered to be water equivalent in a non-water-equivalent matrix. The new algorithm appears to provide improvements in accuracy for small field size dose distributions but did not always provide accurate monitor units for lower energy fields. These errors for low energies agree with the literature 2-4. The eMC algorithm provides improvements in accuracy, even when used with larger calculation grid sizes. However, there are inaccuracies in the calculated monitor units for low energy electron beams. Implementation of this algorithm requires education of radiation oncology staff, as it works in a significantly different way to those previously used. The use of automatic normalisation and the complication of the dose-to-medium calculation performed by eMC require careful consideration. In most situations, calculations with our settings did not require additional calculation time as compared to the previous GGPB algorithm. Introduction: Solid phantoms are commonly used for the dosimetry of kilovoltage x-ray beams and brachytherapy sources. It has been previously shown that some solid phantoms produce significant dose differences as compared to water at these energies [1-3]. It has also been reported that at these x-ray energies, the use of different materials for backscatter may not significantly affect surface doses as compared to water 2. For a phantom material to be considered water-equivalent, it should have the same absorption and scatter properties as water. The purpose of this work is to evaluate the water equivalency of a range of solid phantoms for kilovoltage x-rays by Monte Carlo calculations of different relative dosimetry measurements. The physical properties of the solid phantoms studied were taken from the IAEA TRS-398 dosimetry protocol and previously published data 4. All doses were calculated using the PENELOPE Monte Carlo code V2006 for x-ray beams in the energy range 50 to 280 kVp. The PENELOPE code requires a primary x-ray spectrum, which was determined using an analytical program 5. The accuracy of the primary spectrum was verified by determining the HVL from air kerma calculations with the appropriate beam attenuator. Each phantom material was modelled in PENELOPE as a large cylinder with a diameter of 20 cm and a length of 20 cm. We calculated two sets of doses for all phantom materials and all x-ray beams. Firstly, we calculated the depth dose on the central axis of the phantoms in order to determine the relative attenuation of the x-ray beam as compared to water. We also calculated the dose to a small water voxel located at the surface of the solid phantom in order to determine the changes in backscatter due to the different phantom material.
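The HVL verification mentioned above can be done by log-linear interpolation of a transmission curve; a minimal sketch, with illustrative transmission data rather than the study's:

```python
import numpy as np

def hvl_mm(thickness_mm, kerma):
    """Half-value layer: attenuator thickness at which the air kerma
    falls to 50% of its unattenuated value, by log-linear interpolation."""
    t = np.asarray(thickness_mm, dtype=float)
    k = np.asarray(kerma, dtype=float) / kerma[0]
    # Interpolate ln(k) vs t, which is close to linear for a
    # quasi-monoenergetic (heavily filtered) beam.
    return float(np.interp(np.log(0.5), np.log(k[::-1]), t[::-1]))

# Illustrative air-kerma transmission data for added Al filtration.
thick = [0.0, 1.0, 2.0, 3.0, 4.0]
kerma = [1.00, 0.72, 0.53, 0.40, 0.30]
print(hvl_mm(thick, kerma))   # -> ~2.2 mm Al
```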
The number of incident particles was selected to provide statistical uncertainties of less than 0.5%. Results and Discussion: For the depth dose calculations, the greatest variation occurred for the 50 kVp x-ray beam. The depth doses calculated in white polystyrene (RW3) and Plastic Water showed the greatest variation compared to water, by up to +14% and −19% respectively. This is attributed to the presence of high atomic number elements in the phantom, leading to an increased photoelectric cross section. There were also deviations when using the PMMA and polystyrene phantoms, but of a lower magnitude. In comparison, there was very good agreement between depth doses in water and in the RMI 457 Solid Water and A150 solid phantoms, with less than 2% deviation over the range of depths. As the x-ray beam energy increased, the agreement in the depth dose between the solid phantoms and water gradually improved. However, Plastic Water continued to give the greatest deviation from water in terms of depth doses. For the dose to the water voxel at the surface of the phantom, the changes in dose varied between −4.5% and +10.6% for the 50 kVp x-ray beam as compared to the dose in a water-only phantom. The greatest deviations occurred for the polystyrene and RW3 solid phantoms. In comparison, the doses in the RMI 457 Solid Water were in excellent agreement with doses to water, with differences of less than 0.5%. These changes in dose for the phantoms studied are consistent with measured dose output factors for a kilovoltage x-ray unit 2. Conclusion: In this work, we have demonstrated that some solid phantoms can give significant differences if used for dosimetry of low energy photon beams. This occurs whether the solid phantom is used for depth dose measurements or for providing backscatter material. If one had to measure dose in a solid phantom at these energies, the best choice would be the RMI 457 Solid Water, Virtual Water or A150 phantom materials. It is recommended that any dose differences between a solid phantom and water be quantified before the clinical use of the solid phantom, checking both the relative absorption and the scattering. Figure 1. 6 MV Sp at 10 cm depth. Figure 2. 18 MV Sp at 10 cm depth. For Sp factors at 10 cm depth, measurement and full MC calculations agree as expected. The simple point source MC model agrees well with the full MC model and with measurement at 18 MV, and suggests that (at 10 cm) the off-axis decrease in energy is mostly compensated for by the increase in incident fluence. At 6 MV, the difference for field sizes >20 cm could be reduced by increasing the incident fluence off axis. It would be interesting to compare measurements and calculations using a point source for a 10 MV x-ray beam and to refine the MC point source model to improve the accuracy at 6 MV. This will be the subject of further investigation. Conclusion: Published MC phantom scatter factors from full treatment head simulations can be used to validate measurements in the clinic. Calculations using simpler point source models may also be used for high energy beams, and for low energy beams at field sizes less than 20 cm. Introduction: It is hypothesized that, for the same treatment dose, fields delivered by proton pencil-beam scanning will deposit less dose outside of the treatment field compared to passively scattered fields. This is because, for the former, protons interact with fewer beam modifying devices which emit scattered primary radiation and neutrons.
We utilized silicon-on-insulator (SOI) microdosimeters and microdosimetry techniques to determine the dose equivalent in close proximity to proton pencil beams, and compared the results to both Monte Carlo simulations and analogous measurements of passively scattered proton treatment fields. Methods: From measured microdosimetry spectra we determined the dose equivalent deposited up to 10 cm laterally from the centre of stationary proton pencil beams and up to 8 cm downstream of the Bragg peak. These measurements were performed using an array of SOI diodes, each with a sensitive volume similar in size to biological cells, that simultaneously provide the dose deposited and the spectra of energy deposition events within each sensitive volume. Information from the spectra of energy deposition events allows one to determine the average quality factor, using established microdosimetric methods, and the dose equivalent at each detector position. The small (1.2 x 3.6 x 0.01 mm³) size of the sensitive volume array is ideally suited to this near-field region, as high spatial resolution is required in regions of steep dose gradients. Measurements were also performed for passive scattering fields using the same experimental setup and microdosimeter as the pencil-beam measurements. In addition, we modeled the expected detector response with a GEANT 4.8.0 Monte Carlo simulation. Results: GEANT4 simulations show that the dose equivalent lateral to the pencil beam is dominated by contributions from scattered primary protons and not neutrons. These protons are scattered through wide angles in the phantom and in the upstream windows required for ionization chambers and vacuum. The dose equivalent due to neutron events dominates downstream of the Bragg peak, and therefore the average quality factor differs in this region compared to the lateral region. From the experimental results, the average quality factor was 2 lateral to the pencil beam and 7 downstream of the Bragg peak. The GEANT4 simulations were used to virtually scan the experimental pencil beams to produce a 3-D irradiation, which was compared to measurements in the passively scattered field. Results indicate that the dose equivalent in the lateral regions was within +10% to −30% of that at analogous positions in the passively scattered field. Conclusions: It is important to utilize a dosimeter with high spatial resolution and established quality factors when determining the dose equivalent and average quality factor, as the radiation field depends on the detector position. SOI microdosimeters and microdosimetry techniques provide such a tool with which to determine these quantities directly from experimental data. Preliminary results suggest that the differences in the dose equivalent near the treatment field between a scanned 3-D irradiation and an analogous passively scattered field are not significant when the same number of primary protons is delivered to the phantom. We hypothesize that for a given field we obtain a comparable dose equivalent in the lateral region near the treatment field using pencil beam scanning and passive scattering (as the scattering of primary protons in the phantom will be the same). Pencil beam scanning can, however, achieve better conformation of the dose in the proximal region compared to passive scattering, and can produce less dose in the near-field region if fewer protons are required to treat a target volume.
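The dose equivalent quoted above is the product of the measured dose and a dose-averaged quality factor derived from the microdosimetric spectrum; a minimal sketch, with a toy spectrum and a deliberately crude step quality function (the exact quality relation used in the study is not stated):

```python
import numpy as np

def dose_equivalent(y_keV_um, dose_dist, dose_gy, q_of_y):
    """Dose equivalent H = Qbar * D, with the average quality factor
    taken as the dose-weighted mean of a quality function q(y) over the
    measured microdosimetric spectrum d(y). q_of_y is a callable; the
    quality relation below is an illustrative placeholder only."""
    y = np.asarray(y_keV_um, dtype=float)
    d = np.asarray(dose_dist, dtype=float)
    q_bar = np.sum(q_of_y(y) * d) / np.sum(d)
    return q_bar, q_bar * dose_gy

# Toy spectrum: lineal energy bins (keV/um) and dose fraction per bin.
y = np.array([1.0, 10.0, 50.0, 150.0])
d = np.array([0.70, 0.20, 0.08, 0.02])
q = lambda y: np.where(y < 10.0, 1.0, np.where(y < 100.0, 10.0, 20.0))
q_bar, h_sv = dose_equivalent(y, d, dose_gy=0.01, q_of_y=q)
print(q_bar, h_sv)   # -> Qbar = 3.9, H = 0.039 Sv for D = 0.01 Gy
```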
At the Peter MacCallum Cancer Centre, the eMC (v7.5.14.3, v7.5.18 and v8.0.05) algorithm is used for clinical plans which involve a complex surface area, inhomogeneities, build-up material, electron/photon combinations, junctional electron fields or source to skin distances (SSD) greater than 105 cm. In addition, eMC is used when the electron field represents the main contribution to a radical prescribed treatment course or if dose information for a critical structure is required. As part of the clinical introduction of the eMC algorithm, verification of skin dose under bolus was performed for a range of eMC planned patients. Methods: Patient groups requiring eMC for their planning include en-face head and neck, breast, axilla photon/electron junctions and scalp. The en-face patient plans are varied and have included a large single field or a combination of feathered and junction gap fields. Optimum calculation parameters 1 were based on reasonable calculation times and accuracy levels for our clinical requirements. Energy-specific calculation grid sizes with 3 mm CT slices, an accuracy setting of 2%, and 3D Gaussian 'strong' smoothing were applied for all patients' dose calculations. The plans for a total of 33 patients were included in the study. Electron energies of 6 MeV, 9 MeV, 12 MeV and 15 MeV were used in these treatment plans. The complexity of the CT planned bolus depended upon the treatment field location relative to the anatomy and the required isodose distribution. Customised wax masks were created from the plan, and single superflab sheets were used, depending on the plan and patient specific requirements. TLD chips (LiF:Mg,Ti of geometry 3 x 3 x 0.9 mm) were placed on the skin at specific locations for the patients' skin dose measurements. The patients' skin dose under bolus was measured with pairs of TLDs and compared with the eMC calculated dose at the same location obtained from the CT images. Results: The measured TLD doses were corrected for the calibrated linac output at the time of TLD measurement to ensure good accuracy in the comparison. The data were also corrected for percentage depth dose, due to small errors in the build-up region calculated by eMC that vary with energy and bolus thickness, thus improving the skin dose comparison. All patients' average skin doses under build-up for eMC versus TLDs were within 5%. A total of 29 of the 33 patients' dose results were within 3% agreement. The average difference for all patients, at all measurement and TLD locations, was found to be 1.9 ± 1.2% (1 SD). Discussion and Conclusions: Closer investigation of the four patients where the differences between plan and measurement were between 3 and 5% indicated that these patients' eMC plans presented steep dose gradients in the TLD region of interest. The complex customised wax created for these patients required careful selection of TLD locations based on the plan. The calculated average differences and SD for each energy were similar to the average difference and SD for all patients. We conclude that the eMC algorithm implemented in Varian Eclipse has proven to be a reliable skin dose calculation algorithm for electron beams under bolus for this group of patients. The aim of this study was to quantify the ability of the Varian Eclipse Pencil Beam Convolution (PBC) algorithm to calculate dose distributions beyond metallic implants.
Many patients who present for radiation therapy treatment have prosthetic implants, and where possible the treatment is planned such that the radiation beams avoid the implant. However, this is not always feasible. It is imperative to test how accurately the treatment planning software models the dose beyond these inserts. With the expectation that the treatment planning system would not be very accurate, the main aim of this study was to quantify the inadequacies of the planning system in a way that the data could be used clinically to provide a reliable dose estimate. Results: Eclipse overestimated the dose beyond the metallic insert in all cases. The over-prediction became smaller as the distance from the insert increased. Also, as expected, the over-prediction increased with the thickness of the metals. For titanium, the error ranged between approximately 2 and 9% of Dmax. In the case of the cobalt-chromium, the over-prediction of dose by Eclipse was much larger, ranging from approximately 5% to 24%. Discussion: Predictably, Eclipse overestimated the dose beyond the metallic inserts. This was predominantly due to the fact that the maximum CT number that could be specified by the user was 3000 HU. This is an unavoidable limitation when calculating patient dose distributions, as the patient CT images usually contain too many artefacts to use the raw CT numbers. The required CT numbers were 4820 HU for titanium and 9750 HU for cobalt-chromium. The dose perturbations at the metallic interfaces were also not present in the Eclipse calculations, simply because PBC does not model them. A future version of Eclipse (ARIA version 8) will allow the user to input CT numbers up to 20,000 HU. The Eclipse Anisotropic Analytical Algorithm (AAA) also models the interface doses to an extent. Both of these will be available at the Princess Alexandra Hospital in the near future, and it is predicted they will greatly improve the accuracy of high-density material calculations. Despite the current limitations, tables have been created that allow the medical physicist to quantify the dose over-prediction by the treatment planning software, as a function of the thickness of the implant and the depth beyond the implant, and that can easily be used in clinical situations. Conclusions: Eclipse V6.5 using the Pencil Beam Convolution (PBC) algorithm with Equivalent TAR inhomogeneity correction was found to over-predict the dose beyond metallic implants. This over-prediction has been quantified and tabulated for titanium and a cobalt-chromium alloy. This table will be used in conjunction with the patient treatment plan to accurately calculate dose beyond the implant. Introduction: With the introduction of new technologies, the quality and success rate of curative treatment are increasing. But the quality of care depends on a chain of procedures, and the final result will never be better than the quality of the weakest link. Great efforts have been made in defining quality control procedures for equipment, patient set-up, treatment reproducibility and dose delivery. But only a few studies explore the uncertainty related to medical decisions, prescription, volume definition and all the clinical steps that contribute to the dose delivery process. An interesting question is whether, for a patient with prostate cancer, treatment plans vary when they are produced by different treatment planners.
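Returning to the metallic-implant study above: a clinical lookup of the tabulated over-prediction, as a function of implant thickness and depth beyond the implant, could be implemented as a simple bilinear interpolation. The sketch below is illustrative only; the table values are placeholders, not the measured data of that study.

```python
# Sketch: correcting a planned point dose beyond a metallic implant using
# a pre-tabulated over-prediction (fraction of Dmax). Placeholder values.
import numpy as np

thickness_mm = np.array([5.0, 10.0, 20.0])     # implant thickness grid
depth_cm = np.array([1.0, 3.0, 5.0, 10.0])     # depth-beyond-implant grid
overpred = np.array([[0.04, 0.03, 0.025, 0.02],   # rows: thickness
                     [0.06, 0.05, 0.04,  0.03],   # cols: depth
                     [0.09, 0.07, 0.06,  0.05]])

def corrected_dose(planned_dose, t_mm, d_cm, dmax):
    """Subtract the bilinearly interpolated over-prediction from the plan."""
    i = np.clip(np.searchsorted(thickness_mm, t_mm) - 1, 0, len(thickness_mm) - 2)
    j = np.clip(np.searchsorted(depth_cm, d_cm) - 1, 0, len(depth_cm) - 2)
    ft = (t_mm - thickness_mm[i]) / (thickness_mm[i + 1] - thickness_mm[i])
    fd = (d_cm - depth_cm[j]) / (depth_cm[j + 1] - depth_cm[j])
    op = ((1 - ft) * (1 - fd) * overpred[i, j]
          + ft * (1 - fd) * overpred[i + 1, j]
          + (1 - ft) * fd * overpred[i, j + 1]
          + ft * fd * overpred[i + 1, j + 1])
    return planned_dose - op * dmax

# e.g. a 2.0 Gy planned dose, 12 mm implant, 4 cm beyond it, Dmax = 2.1 Gy:
print(f"{corrected_dose(2.0, t_mm=12.0, d_cm=4.0, dmax=2.1):.3f} Gy")
```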
To investigate the sensitivity of the model to inter-observer variability at the planning stage, the biological responses obtained from several 3D-CRT treatment plans produced in the same planning system, for the same patient, by different planners were compared. Methods: CT scans obtained from a local hospital in Brisbane for a particular prostate cancer patient were used by ten different radiation technology students, each of whom designed a treatment plan for the patient. The total dose prescribed for the patient was the same (70 Gy), delivered in two phases, for all 10 plans. Each plan had a prescription dose of 60 Gy in the first phase and 10 Gy in a boost phase, resulting in a total of 70 Gy. The differences caused by the target volume variations in the inter-observer plans influence the radiobiological responses for the plans. To investigate these differences, TCPs for the prostate and NTCPs for the bladder, rectum and femoral heads were calculated for all 10 inter-observer plans. Results: The calculated TCPs for the prostate; the NTCPs for the rectum, bladder, right femoral head and left femoral head; the composite NTCPs; and the complication-free tumour control probabilities (P+) are given in Table 1. Discussion: All 10 inter-observer plans resulted in very similar TCPs, but there were differences in the NTCPs for the different organs, so differences arose in P+ as well. It is evident that all the TCPs were above 98%. The composite NTCPs were quite low but varied considerably, ranging from 6% to 13%. As a result there were differences among the P+ values for these inter-observer treatment plans. Conclusions: High TCPs can be achieved merely by delivering a high dose to the tumour volume; the inter-observer dependency of TCP is very low. But as one of the main aims of successful radiotherapy is not to harm healthy tissues, NTCPs must be as low as possible, and the NTCPs are very much planner dependent. There should be a detailed protocol for contouring organ-at-risk volumes, and it should be followed. This study reflects the comment made by Cazzaniga et al 1 that each department should evaluate its own uncertainties, the statistics of which represent the quantitative criterion to be adopted to precisely define operative instructions. Introduction: With advances in technology, the amount of data generated for each radiotherapy patient has increased. To manage this large mass of data, almost all hospitals use a Record and Verify (RV) system. An RV system is a database capable of storing a wide range of information related to a patient's treatment, including scheduling data, image data, treatment history, patient notes and planning data. Most RV systems retain patient setup data collected from image registration, which can be accessed and processed to provide useful patient group statistics. Our motivation for accessing setup data was to inform the selection of an appropriate CTV-to-PTV margin size for prostate cancer patients in our fiducial marker program. Method and Materials: The Peter Mac uses Impac™ as an RV system, currently Multi-Access V8.3. Impac's underlying Btrieve™ database contains 164 separate data tables, each of which can be accessed using a database report-writing utility such as Crystal Reports. Queries were written in Crystal Reports V8.5 linking together a number of the tables to extract treatment time and setup information. The reports were processed through software developed in Visual Basic.NET™ for this purpose.
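For the radiobiological comparison in the inter-observer study above, the composite NTCP and P+ computations are short enough to sketch. The example below assumes statistically independent organ responses and uses the common approximation P+ = TCP x (1 - composite NTCP), one of several P+ definitions in the literature; all probability values are illustrative, not those of the study.

```python
# Sketch: composite NTCP and complication-free tumour control P+ for one
# plan, assuming independent organ responses. Placeholder probabilities.
def composite_ntcp(ntcps):
    """1 - product of (1 - NTCP_i) over all organs at risk."""
    p_no_complication = 1.0
    for n in ntcps:
        p_no_complication *= (1.0 - n)
    return 1.0 - p_no_complication

tcp = 0.985
organ_ntcp = {"rectum": 0.06, "bladder": 0.02,
              "right femoral head": 0.01, "left femoral head": 0.01}
ntcp_total = composite_ntcp(organ_ntcp.values())
p_plus = tcp * (1.0 - ntcp_total)
print(f"composite NTCP = {ntcp_total:.3f}, P+ = {p_plus:.3f}")
```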
The data were finally exported to a Microsoft Access™ database for further analysis. Results: The Access database calculates the group systematic and random errors and hence the CTV-to-PTV margin, calculated using van Herk's margin recipe. Toxicity data are also displayed for individuals and for the whole group. Treatment time information is also available, and comparison between times for the imaging process using kV equipment and MV equipment is straightforward. Conclusion: Software was developed to allow large amounts of information to be extracted readily from a commercial record and verify system. This has helped materially in the assessment of department treatment margins. Introduction: PMCC treats selected prostate patients with high dose rate (HDR) brachytherapy. These patients are implanted trans-perineally with a number of flexible plastic catheters which pass into or through the prostate, and they receive two or three fractions of HDR treatment over two or three days. Because brachytherapy dose distributions show a rapid reduction in dose with distance from the catheters, HDR relies for its success on accurate and stable location of the implant. Unfortunately, the implant has a tendency to move inferiorly over the time the patients remain on the ward. The cause of this movement is attributed in the literature to swelling due to oedema in the region between the apex of the gland and the perineum, and yet standard nursing care for these patients is to drastically restrict their movement in bed. A study is underway at PMCC to assess whether there is any relationship between the extent of patient movement while on the ward and the movement of the implant. This required the development of a device to measure the flexure and extension of the hip joint. A new device based on a miniature three-axis accelerometer has been developed to measure patient movement while lying in bed. Accelerometers have become cheaply available due to their use in consumer goods. Most laptop computers, for example, now use accelerometers to detect motion, allowing the computer to prevent hard drive damage. The accelerometer responds to gravity as if it were a static acceleration. A method of calibration has been developed, and the application of simple trigonometry to the axis outputs of the accelerometer yields the angle between the device and the horizontal. The outputs are sampled using an analogue-to-digital converter connected by USB to a notebook computer. Results: Figure 1 shows the bare surface-mount printed circuit board (PCB). In use, the PCB will be potted in a polyester material. The device responds stably and, after calibration, gives repeatable readings accurate to better than two degrees. Conclusion: An electronic inclinometer has been developed which is suitable for attachment to a patient. Methods: 6 MeV and 9 MeV electron beams from a Siemens Primus linear accelerator were commissioned to deliver electron arc therapy. The percentage depth dose (PDD) and profiles for various trapezoidal apertures of the applicator were measured with a scanning water tank system (RFA300, Scanditronix Medical AB, with the Omni Pro 6 software) and a Scanditronix p-type silicon diode (EFD, SN DEB12-3035). Results: As an example, Fig. 1 shows the in-plane profiles of the 6 MeV beam for three trapezoidal apertures. Note that a trapezoidal aperture of the applicator is characterised using only its two short edges. The profiles were measured at the depth of dose maximum and at 85 cm source-to-surface distance (SSD).
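Returning briefly to the record-and-verify abstract above: the margin derivation performed by the Access database can be sketched directly, assuming van Herk's recipe margin = 2.5Σ + 0.7σ and one list of daily setup errors per patient. The data below are made up for illustration.

```python
# Sketch: group systematic (Sigma) and random (sigma) setup errors and the
# resulting CTV-to-PTV margin via van Herk's recipe. Made-up data.
import numpy as np

def van_herk_margin(per_patient_errors_mm):
    means = [np.mean(p) for p in per_patient_errors_mm]   # patient systematics
    sds = [np.std(p, ddof=1) for p in per_patient_errors_mm]
    Sigma = np.std(means, ddof=1)              # SD of the per-patient means
    sigma = np.sqrt(np.mean(np.square(sds)))   # RMS of the per-patient SDs
    return 2.5 * Sigma + 0.7 * sigma, Sigma, sigma

patients = [np.random.normal(mu, 2.0, size=30) for mu in (-1.0, 0.5, 2.0, -0.3)]
margin, Sigma, sigma = van_herk_margin(patients)
print(f"Sigma = {Sigma:.1f} mm, sigma = {sigma:.1f} mm, margin = {margin:.1f} mm")
```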
The trapezoidal aperture becomes progressively narrower from the negative side to the positive side of the x-axis. (Figure legend: open, 4 cm-by-3 cm, 4 cm-by-2 cm and 4 cm-by-1 cm apertures, all at 85 cm SSD, 6 MeV.) Conclusions: Different trapezoidal apertures show quite different variations of dose across the field. There is no single mathematical relation that can be established to describe the dose variation for all trapezoidal apertures. Therefore one needs a method to determine the specific trapezoidal aperture for each individual patient. Introduction: There are a large number of commercially available radiotherapy treatment planning systems. These treatment planning systems form the basis for the large majority of radiotherapy treatments performed in Australia and in many other countries. Different treatment planning systems utilise different dose calculation algorithms, resulting in minor differences between treatment plans. The recent introduction of Intensity Modulated Radiotherapy (IMRT) with inverse planning has added to the potential differences between treatment planning systems. This study was undertaken to compare treatment plans generated with two commercially available treatment planning systems and to conduct an initial assessment of the possible differences in terms of radiobiological effect. Methods: The KonRad and XiO IMRT treatment planning systems were compared in this study. As an initial comparison, the calculated dose distributions for a sample of open beams on a flat water phantom were considered to assess any differences in the beam models. To assess the differences in the inverse planning algorithms, treatment plans were generated in both systems based on a standard prescription to the target and normal tissue dose limitations. An external objective function was utilised to compare the two plans. From the resulting dose distributions, the possible clinical differences were assessed with radiobiological models. Tumour control probability was determined for the target volume, and normal tissue complication probabilities were determined for the spinal cord and the parotids. Results: KonRad uses a gradient descent optimisation algorithm, originally written by Bortfeld [1], and uses a pencil beam model for dose calculation. XiO IMRT uses a "conjugate gradient" optimisation algorithm, a special type of "gradient descent" optimisation algorithm. Both methods use the negative gradient of a cost function, with respect to the input parameters, to systematically find the minimum of the cost function. The difference between the conjugate gradient and gradient descent methods is in the sequence of search directions selected during optimisation. Conjugate directions are influenced by the local gradient but usually do not follow the negative gradient [2]. Differences were seen in the final treatment plans generated by the two systems. Differences were also seen in the tumour control probability and the normal tissue complication probabilities, primarily due to the difference in weighting between the two optimisation algorithms for the target and normal tissue volumes. Discussion: Differences between inverse treatment plans generated with different treatment planning systems are expected; however, the clinical impact of this has not been assessed. The results of this initial comparison indicate that further investigation of the impact over a large patient population is warranted.
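The distinction between the two search strategies compared above can be made concrete on a toy quadratic cost of the kind both systems minimise. In the sketch below, conjugate gradients reaches the exact minimum of a 2-D quadratic in two iterations, whereas plain gradient descent only approaches it; this is a textbook illustration, not either vendor's implementation.

```python
# Sketch: gradient descent vs conjugate gradients on f(x) = 0.5 x'Ax - b'x.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])    # positive-definite "cost" matrix
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b

def gradient_descent(x, steps=50, lr=0.2):
    for _ in range(steps):
        x = x - lr * grad(x)               # always follow the negative gradient
    return x

def conjugate_gradient(x, steps=2):
    r = b - A @ x
    d = r.copy()                           # first direction IS the neg. gradient
    for _ in range(steps):
        alpha = (r @ r) / (d @ A @ d)      # exact line search on a quadratic
        x = x + alpha * d
        r_new = r - alpha * (A @ d)
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d               # conjugate, not steepest, direction
        r = r_new
    return x

x0 = np.zeros(2)
print("GD:", gradient_descent(x0), " CG:", conjugate_gradient(x0))
print("exact:", np.linalg.solve(A, b))
```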
The impact of these differences should also be compared with the differences in patient treatments due to other factors (e.g. contouring, patient motion and clinician prescription variations). Conclusion: KonRad and XiO have slightly different optimisation algorithms and different beam models. This leads to differences in the resulting treatment plans and potentially to differences in clinical effect. Further investigations are necessary to consider the impact of these differences over a patient population. Introduction: External beam therapy quality assurance (QA) requires that plans are validated. An independent monitor unit calculation is an essential part of that QA process. At the Royal Adelaide Hospital, a transition was made for monitor unit verification from "home-grown" software to a commercial package. RadCalc software is now used as an independent tool for checking dose calculations carried out by the Pinnacle treatment planning system. However, electron beam calculations have only recently been commissioned for clinical use. This report describes the commissioning of RadCalc version 5.2 electron beam calculations. Data collection and entry: Before any calculation can be performed, all of the relevant treatment machine data should be input into RadCalc as per the instructions. These include essential machine information such as energy, calibration details, cones, jaw settings for cones, cone factors, PDDs, etc. For custom electron cut-outs, an electron cut-out library should be established to enable the software to calculate the cut-out factor automatically. To establish the cut-out factor library, square fields from 3 x 3 cm² up to the maximum cone size, and circular fields from 3 cm diameter upwards, were measured at 100, 105, 110 and 115 cm SSD and the data were entered into the software. For each machine, electron energy and electron cone, the value of the effective SSD (in cm) was measured and entered into the software as well. Dose calculations were carried out for a range of square fields, rectangular fields (including extremely elongated fields) and irregular fields to check the accuracy of the algorithms by comparing the output factors calculated by RadCalc with measured values. In RadCalc, both the Square Root and Sector Integration methods were used for the calculations. The results of the commissioning tests showed that the RadCalc-calculated cut-out factors for square fields agree with the measured data within 1% at 100 cm SSD. The agreement is within 1.5% for extended SSDs. For rectangular fields the agreement is generally within 3%; however, for extremely elongated fields, especially if one side is less than 4 cm, differences of up to 12% were observed, particularly at the lower energies. For irregular fields, RadCalc-calculated output factors agreed with the measured data within 3%. Discussion: Since the square root method in RadCalc uses interpolation to calculate the cut-out factors, the calculation accuracy depends on the square field data entered into the software. The sector integration method uses circular cut-outs to calculate each sector; its accuracy depends on the amount of circular field data entered into the software. For rectangular fields, RadCalc uses a formula to calculate cut-out factors. The uncertainty increases for larger X/Y ratios. It is not possible to make the calculation more accurate by adding more rectangular fields to the database, because the calculation algorithm does not use the rectangular field data.
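The sector-integration idea described above can be sketched directly: the field edge is sampled as a radius about the calculation point, and each sector contributes the measured circular-field output factor for its radius. The circular-field table and the elliptical cut-out below are made-up placeholders, not RadCalc's internal algorithm or our commissioning data.

```python
# Sketch: sector integration of an irregular electron cut-out from a table
# of measured circular-field output factors. Illustrative values only.
import numpy as np

radii_cm = np.array([1.5, 2.0, 3.0, 4.0, 5.0])          # measured circle radii
circ_output = np.array([0.92, 0.95, 0.98, 1.00, 1.01])  # measured factors

def sector_integration(edge_radius_fn, n_sectors=72):
    thetas = np.linspace(0.0, 2.0 * np.pi, n_sectors, endpoint=False)
    factors = [np.interp(edge_radius_fn(t), radii_cm, circ_output)
               for t in thetas]                # clamps outside the table range
    return float(np.mean(factors))             # average over all sectors

# e.g. an elliptical cut-out with semi-axes 4 cm and 2.5 cm about the point:
ellipse = lambda t: 1.0 / np.sqrt((np.cos(t) / 4.0) ** 2 + (np.sin(t) / 2.5) ** 2)
print(f"cut-out factor ~ {sector_integration(ellipse):.3f}")
```

As the abstract notes, the accuracy of such a scheme is set by the density of the circular-field data, which is why curve fitting the measured points could improve on plain interpolation.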
We advise that for elongated/rectangular fields the first choice is to select a factor from the measurement database ('Get from table' option); if no data are available, the square root method should be used for the calculation. For irregular fields, both algorithms should give similar results. The calculation accuracy could be improved by using curve fitting for the measured square and circular fields instead of linear interpolation. Conclusions: RadCalc can be used as an independent monitor unit calculator for electron beams, provided the user inputs sufficient data into the database. Introduction: The skin is a complex and multi-layered organ that serves many functions critical to the body, such as providing protection against mechanical trauma. The mechanical properties of skin are nonlinear, anisotropic and viscoelastic. This is mainly due to the microstructural architecture and mechanical properties of its individual components. Collagen is the main load-bearing component in the skin; hence its structure and distribution within the dermis play an important role in the overall mechanical behaviour of skin tissue. Structurally based constitutive relations are therefore useful in providing insight into the critical link between macroscopic mechanical response and tissue structure. The structurally based constitutive law used in this study is based upon the work of Lanir (J. Biomech., 16:1-12, 1983). In this modelling framework, families of collagen fibres are embedded within a volume block of gel matrix. Each fibre is assumed to be undulated and orientated at a particular angle within the tissue. The fibres can only resist tensile loading and undergo a uniaxial strain when stretched, which is representative of the macroscopic tissue strain along the fibre direction. Statistical distributions are used to represent the distributions of collagen fibre orientation and degree of waviness. Other model parameters include collagen density, fibre stiffness and properties associated with the isotropic gel matrix. A finite element mesh was created to represent a piece of circularly shaped skin tissue undergoing multiaxial deformation. The mesh consists of 16 elements which are interpolated using bicubic Hermite basis functions, and the mechanical response was computed using finite deformation elasticity. The modelling work is intended to be used in conjunction with mechanical experiments performed on pig skin tissue using a multiaxial rig. The multiaxial rig enables forces to be applied radially at each of the twelve displacement actuators attached to the boundary of the skin tissue, while the positions of material points are tracked between deformation states. Data obtained from such mechanical tests feed into an optimisation procedure to estimate the constitutive parameters. Results: A model parameter study has been performed to show the effects of the individual constitutive properties by simulating different structural scenarios of the collagen network. It has been shown that the nonlinear behaviour commonly observed for collagenous tissues is mainly due to fibre undulations. We will report on the estimated model parameter values obtained through mechanical testing and mathematical optimisation. Conclusions: A modelling framework that incorporates the influence of individual structural dermal components on the overall macroscopic mechanical response of skin has been developed.
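The fibre-recruitment mechanism identified above as the source of the nonlinearity can be illustrated in one dimension: each fibre is slack until the tissue stretch exceeds its recruitment stretch, then loads linearly, and integrating over a waviness distribution yields the familiar stiffening response. All parameter values in the sketch are illustrative, not the estimated model parameters of the study.

```python
# Sketch: Lanir-style fibre recruitment in 1-D. A Gaussian distribution of
# recruitment stretches produces a nonlinear ensemble stress response even
# though each fibre is linear once taut. Arbitrary units throughout.
import numpy as np

def ensemble_stress(stretch, k_fibre=100.0, mean_rs=1.2, sd_rs=0.08, n=2000):
    rs = np.linspace(1.0, 1.6, n)                        # recruitment stretches
    p = np.exp(-0.5 * ((rs - mean_rs) / sd_rs) ** 2)     # Gaussian waviness
    p /= np.trapz(p, rs)                                 # normalise the density
    fibre_strain = np.clip(stretch / rs - 1.0, 0.0, None)  # slack if negative
    return k_fibre * np.trapz(p * fibre_strain, rs)

for lam in (1.0, 1.1, 1.2, 1.3, 1.4):
    print(f"stretch {lam:.1f} -> stress {ensemble_stress(lam):7.3f}")
```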
We aim to incorporate structural data from histology studies in the future. Introduction: Like the heart, the muscular layers of the stomach exhibit omnipresent electrical activity, known as slow waves. Slow waves originate from specialised pacemaker cells called the interstitial cells of Cajal (ICC), which are situated between and within the gastric smooth muscle (SM) layers. Slow waves depolarise smooth muscle cells, which is required for contractions. In the normal stomach, slow wave activity occurs at a frequency of 3 cycles per minute (cpm). Dysrhythmic slow wave activity decreases gastric motility and leads to motility-related disorders such as gastroparesis. Gastric electrical stimulation (GES) entrains slow wave activity through external electrical stimulation. However, identification of the ideal stimulation parameters to achieve a desired outcome is an ongoing challenge. We propose computational simulation as a new strategy to identify which parameters are likely to achieve ideal results. A gastric tissue model was developed, first showing the normal gastric electrical activity (GEA) and then the effects of applying a 9 cpm stimulus to the tissue. Methods: A gastric tissue model was developed to represent a portion of the stomach, as shown in Figure 1a. The model consists of two electrically coupled continuum domains. The ICC domain (Figure 1b) was described by a cellular automaton algorithm providing the pacemaker current (I_ICC) to the SM domain (Figure 1c). The SM domain was described using a biophysically based cellular model of gastric smooth muscle cells. Each domain was solved over a 100 x 100 mm 2D layer using the Finite Difference Method (FDM). The conduction velocity in the material direction (the y-direction) was set to 22 mm s⁻¹, and the conduction velocity in the direction perpendicular to the muscle fibres (the x-direction) was set to 11 mm s⁻¹. Results: The simulated normal slow wave originated from the location representing the pacemaker region, as shown in Figure 2a. The slow wave activity propagated from right to left in the SM layer, which was consistent with the downward direction of the stomach in Figure 1a. The duration of each cycle of depolarisation was 20 s. The stimulus (Fig. 2b) produced retrograde activity which had gradually reversed the direction of the slow waves in the SM layer by 35 s. The frequency of the slow wave activity in the SM layer was elevated to 4.5 cpm by the stimulus. Discussion: A gastric tissue model was developed, and the simulated results reproduced the known propagation velocities, direction and frequency of normal slow waves. Furthermore, retrograde slow wave activity induced via a 9 cpm stimulus was entrained at 4.5 cpm, consistent with experimental observations. Conclusions: The computational model presented above is capable of replicating a variety of gastric electrical activity patterns and provides an initial platform to test some of the effects of various gastric electrical stimulation protocols. ... will benefit from parallel processing in the sensor network. Scheme 4 utilises a DSP computational layer to further enhance the system's performance. The DSP is capable of processing the algorithm at a very high speed. Although it draws a large running current, the operational time of the DSP after the SPI clocking is 22 ms (a 100x performance gain). In practical systems, assuming the 4-second epoch measurements are sent every 30 seconds using Scheme 2, the expected battery life of the sensor node is approximately 40.45 hours.
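The duty-cycle arithmetic behind battery-life estimates of this kind is simple to reproduce. In the sketch below, all currents and capacities are illustrative placeholders, not the measured values of the sensor node described.

```python
# Sketch: duty-cycled battery-life estimate for a sensor node that is
# active for a short burst each measurement cycle. Placeholder numbers.
def battery_life_hours(capacity_mah, active_ma, sleep_ma,
                       active_s_per_cycle, cycle_s):
    duty = active_s_per_cycle / cycle_s
    avg_ma = duty * active_ma + (1.0 - duty) * sleep_ma
    return capacity_mah / avg_ma

# e.g. a 4 s measurement epoch transmitted every 30 s:
print(f"{battery_life_hours(250.0, 40.0, 0.5, 4.0, 30.0):.1f} h")
```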
Conclusions: This paper has shown the possible digital signal-processing schemes implementable in an OBN. Each scheme has an associated power profile that indicates the total power cost of acquiring and processing the signal. Future work is being carried out on DSP hardware optimisations and the development of effective DSP algorithms to extract vital-sign parameters in an OBN. Introduction: Bone allografting is a technique that can be used to assist in the treatment of cancer and osteomyelitis, and in the revision of prostheses. Postoperative infection is a major issue in the use of bone allografts, with postoperative infection rates of up to 13% 1, often resulting in the need for further surgery or amputation. Iontophoresis is the movement of ions across a membrane using an externally applied electric potential. Historically, iontophoresis has been used almost exclusively as a transdermal drug delivery system. Using iontophoresis to relatively rapidly saturate cortical bone allografts with antibiotics in the operating theatre setting (prior to their use in surgery) has the potential to reduce the incidence of postoperative infections 2. This study intends to use iontophoresis to saturate a sample of cortical bone with a radiolabelled solution of iodine-125. Autoradiography will then be used to image the bone to illustrate the exact location of the antibiotic within the sample. Micro-computed tomography images and scanning electron micrographs of the bone sample will be used to highlight the bone microstructure in the sample and relate this to the resultant location of the antibiotic. This work will utilise a custom-built iontophoresis cell that has already been developed and tested. Methods: A customised iontophoresis cell was constructed so that a solution of charged ions could be visually recorded as it diffused through a cylindrical sample of cortical ovine bone. The biological stain methylene blue was used to demonstrate the diffusion process, as it is highly visible and has electrical properties similar to those of the antibiotic gentamicin. The solution of methylene blue was placed in the medullary region of the sample. At the centre of this region was placed a positive electrode made of stainless steel. The bone was immersed in a saline solution contained in a plastic cylinder, the inside of which was lined with a cylindrically shaped stainless steel mesh electrode. The fluid in the centre of the sample was separated hydrostatically from the outer solution by sealing the upper and lower bone edges with Perspex and ethylene vinyl acetate foam respectively, and clamping the cell externally. When a potential difference of 90 volts was applied between the electrodes, the diffusion of charged methylene blue molecules from the endosteal to the periosteal surface was facilitated, and the whole bone sample became permeated with the dye. It was hypothesised that gentamicin would undergo a similar process of facilitated diffusion in human bone. Results: A custom cell has been constructed that can be used to visually record the iontophoretic diffusion of a solution of charged ions through cortical bone. Methylene blue has been used to illustrate the facilitated diffusion process, and the diffusion front of the solution has been recorded as it diffuses through the bone sample. Currently, in order to study the location and diffusion of gentamicin, a labelling technique must be used so that the antibiotic can be seen, by radioactive or other means.
If the diffusion through allograft bone of methylene blue and gentamicin can be shown to be adequately similar, future studies involving various physical properties of gentamicin could be carried out using methylene blue. This result would also validate work done previously 2 in the area of bone allograft research. The various imaging techniques employed during this research will also add to the current level of understanding of where exactly gentamicin exists inside a bone allograft and the manner in which it diffuses in and out of bone. Introduction: Lapses are brief (< 15 s) complete losses of the ability to respond during a task. On discrete tasks, such as the psychomotor vigilance task, they have been arbitrarily defined as having occurred when reaction time > 0.5 s. However, on continuous tracking tasks one can use tracking information before and after lapses to detect the onset and end of lapses with much higher temporal accuracy than is possible with discrete tasks. We have developed a novel 2-D visuomotor pursuit tracking task to reliably detect lapses with sub-second temporal resolution. Accurate behavioural detection of lapses is a vital component in our current simultaneous-fMRI+EEG study aimed at determining the spatiotemporal dynamics of lapses. The target in our tracking task moves in a pseudo-random 2-D path, defined in both directions by sums of equally frequency-spaced sine waves up to 0.25 Hz, all with random phases, a period of T = 30 s (Fig. 1a) and a resultant speed of 63 pixels/s, i.e., no flat spots in the target motion (Fig. 1b). The task software provides ms-scale synchronisation with external computer-based devices. A 10-min (20-cycle) tracking session was run to document baseline tracking behaviour. Ten healthy right-handed volunteers (5 males, 5 females; aged 22-55 y, mean = 31.6 y) were presented with a yellow disc (d = 18 pixels), representing the target, and a red disc (d = 15 pixels), representing the joystick response, on a computer screen (338 x 270 mm, 1024 x 768 resolution, refresh rate 60 Hz) with an eye-screen distance of 120 cm. Subjects were instructed to move the joystick so that the response disc was as close as possible to the moving target at all times, except for (1) prediction trials, in which the target disc disappeared and the subject had to predict the path of the target, or (2) non-responsiveness trials, in which the response disc stopped and the subject had to stop tracking as soon as possible. The joystick response was sampled at 60 Hz. Tracking error was measured as the Euclidean distance between the centres of the target and response discs. Results: The grand average response error during non-responsive trials demonstrated that the response error increases substantially (~80 pixels) within 0.5 s of the subject becoming non-responsive (Fig. 1c). The task is sufficiently unpredictable that individuals must attend to the target at all times to achieve alert performance. Our results show that our novel 2-D task can reliably detect lapses of 250 ms. In conclusion, measures of responsiveness on our 2-D tracking task, combined with eye video and VEOG, will be invaluable in our high-temporal-resolution identification and investigation of microsleeps and attention lapses. Division of Microelectronics and Biotechnology, Institute of Electronics, Silesian University of Technology, Gliwice, Poland. Introduction: The modulation of heart rhythm by respiratory motion, often called heart rate variability, is widely described in the literature.
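For the pursuit-tracking task above, the target path can be generated exactly as described: sums of equally spaced sinewave harmonics up to 0.25 Hz with random phases and a 30 s period. The amplitude scaling in the sketch below is an assumption for illustration; it is not the task software itself.

```python
# Sketch: pseudo-random 2-D target path from equally spaced harmonics of
# 1/30 Hz up to 0.25 Hz, with independent random phases per axis.
import numpy as np

def target_path(t, period_s=30.0, f_max_hz=0.25, amplitude=100.0, seed=0):
    rng = np.random.default_rng(seed)
    n_harmonics = int(f_max_hz * period_s)       # harmonics up to 0.25 Hz
    freqs = np.arange(1, n_harmonics + 1) / period_s
    x = np.zeros_like(t)
    y = np.zeros_like(t)
    for f in freqs:
        x += np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
        y += np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
    scale = amplitude / max(np.max(np.abs(x)), np.max(np.abs(y)))
    return scale * x, scale * y

t = np.arange(0.0, 30.0, 1.0 / 60.0)             # one 30 s cycle at 60 Hz
x, y = target_path(t)
```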
Changes in heart rate do not arise solely from respiratory motion; they are also governed by many control processes of the autonomic nervous system. However, in most cases researchers consider respiratory motion to be the phenomenon most strongly influencing the heart. Respiratory artifacts are considered to be one of the disturbing factors influencing both the ECG and the EGG. Unfortunately, both the useful component of the EGG and the respiratory artifacts are localised in the same frequency range. The classical approach to removing these disturbances, based on appropriate filtering, leads to artifacts arising in the useful EGG signal as well as to the loss of part of the important information included in that signal. A new method for addressing these problems is suggested in the present paper. Methods: EGG signals were registered with the help of a four-channel amplifier characterised by the following parameters: frequency range from 0.015 to 50 Hz, gain k = 2500 or k = 5000, amplitude resolution of 12 bits, sampling frequency of 200 Hz per channel, and a useful signal amplitude range of ±1 mV. The signal registration process applied the standard electrode configuration. For respiratory motion, a triple-axis acceleration sensor (type MMA7260Q, manufactured by Freescale) was placed near the electrodes, with the following parameters: number of axes 3 (XYZ), sensitivity 800 mV/g, measurement range ±1.5 g. Both the acceleration sensor signal (ACS) and the EGG signal were registered synchronously. Only one component (AY) was used in the examination process, because the components in the planes perpendicular to the direction of chest motion did not include a visible component correlated with respiratory motion. Signals lasting from 20 to 120 minutes were registered during the examination process. The relatively high sampling frequency (200 Hz per channel) also allowed further analysis of the ECG signal. Changes in the distance between R peaks, as well as the first derivative of the RR intervals, were calculated in the presented work. The filtered EGG signal and several respiration-dependent components were obtained as a result of the presented data processing. Figure 1 presents a comparison of the reconstructed respiration-dependent data. The influence of the respiratory movements can be noticed in all presented signals except the ECG. The main agents causing the EGG signal disturbances are artifacts connected with the respiratory movements. Several methods of reconstructing signals related to respiratory movements have been presented in this paper. The analysis shows that the reconstruction of respiration-related signals based on the amplitude of the R wave in the ECG signal correlates highly with the registered respiratory movements. The obtained results are comparable with signals obtained by means of band-pass filtering. In the research process, a relatively strong influence of the baseline interpolation method on the measured changes of the ECG T-wave amplitude was observed. The relatively low values of the correlation coefficient between the ACS signal and the reconstructed amplitude of the T waves may be caused by improper selection of the baseline interpolation method; the selection of the baseline interpolation method may require further examination.
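The R-wave-amplitude reconstruction evaluated above is compact enough to sketch: the beat-by-beat R-wave amplitude series, resampled to a uniform time base, serves as a surrogate respiratory signal. The sketch assumes a filtered ECG array and pre-detected R-peak sample indices; detector and filtering stages are outside its scope.

```python
# Sketch: ECG-derived respiration from beat-by-beat R-wave amplitudes.
import numpy as np

def edr_from_r_amplitudes(ecg, r_peaks, fs=200.0, fs_out=4.0):
    """Return a uniformly resampled surrogate respiratory signal."""
    t_beats = np.asarray(r_peaks) / fs           # R-peak times (s)
    amps = np.asarray(ecg)[r_peaks]              # R-wave amplitudes
    amps = amps - np.mean(amps)                  # remove the DC level
    t_uniform = np.arange(t_beats[0], t_beats[-1], 1.0 / fs_out)
    return t_uniform, np.interp(t_uniform, t_beats, amps)
```

The reconstructed series can then be correlated against the acceleration sensor output to quantify how well it tracks chest motion, which is the comparison reported below.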
A simple comparison of the respiratory signals obtained by R-wave detection, and by heart rate variability analysis, with the signal from the acceleration sensor leads to the conclusion that the presented methods recover the respiratory signal well. Introduction: Radiation protection eyewear (RPE) is used to protect the eyes from scattered radiation during fluoroscopic procedures and is worn by interventional radiologists or cardiologists to reduce posterior subcapsular cataract formation. These are typically made from glass impregnated with high atomic number elemental lead and/or barium. This study was motivated when a few interventional radiologists received greater than 1.5 mSv (the Sir Charles Gairdner Hospital lowest collar dose trigger level) in their monthly collar film badge measurements. The aims of this study were to determine (i) the effectiveness of RPE in attenuating scattered radiation and (ii) whether the scattered radiation doses measured at the collar badge position reflect the doses received at the eyes. Methods: In this study, the Aegis ED2 (John Caunt Scientific Limited, Oxford, UK) solid state (probe) dosimeter was used to measure scattered radiation doses. The physical dimensions of the Aegis ED2 probe are 1 x 3 cm, appropriate for placement on the frame of the RPE for dose measurements. In the first phase of this investigation, unattenuated and attenuated scattered radiation were measured using a Toshiba fluoroscopic system (Toshiba Ultimax, Toshiba Medical Systems Corporation, Tokyo, Japan), with a PEP phantom simulating a patient. To investigate the attenuating capabilities of these RPE, measurements of unattenuated and attenuated scattered radiation doses were made on five RPE, using X-ray tube voltages ranging between 70 and 120 kVp with a conservative tube current of 4 mA. The RPE was set up in a configuration simulating a typical fluoroscopic screening procedure. In the second phase, the Aegis probes were attached to an interventional radiologist to measure the scattered radiation he would receive during interventional procedures. One dosimeter probe was attached to the bridge of the RPE (forehead) and the other to the collar badge, worn at shoulder level. Dose measurements were recorded for ten different interventional procedures. Dose area product (DAP) and screen times were documented for each procedure. The variability and complexity of the interventional procedures involved made comparison between absolute doses at the forehead less meaningful; we therefore used the normalised average dose per unit screen time as a possible index of the attenuated scattered radiation dose to the eye of the interventional radiologist. Results: Results from the initial measurements were encouraging, with all five RPE tested transmitting no more than 7% of the scattered radiation. There was variability in the screen times and complexity of the ten interventional procedures involved; DAP ranged from 41.3 to 23837.0 cGy·cm² (mean 8861.1 cGy·cm²) and screen time ranged from 0.3 to 20.9 minutes (mean 11.8 minutes). The correlation of scattered radiation doses at the collar badge and at the forehead was high (R² = 0.96), though the differences (at film badge minus at forehead) varied between -3 and 16.9 µSv (mean 3.83 µSv). Correlations between DAP and dose at the collar badge, and between DAP and dose at the forehead, were 0.93 and 0.96 respectively. Unattenuated scattered radiation dose to the forehead ranged between 0 and 39.4 µSv (mean 14.8 µSv).
Unattenuated scattered radiation dose to the collar badge ranged between 0.2 and 49.8 µSv (mean 18.7 µSv). Conclusions: It is evident from the results of the first-phase measurements that RPE protect the eyes from scattered radiation, potentially attenuating 93% of the incident scattered radiation. The scattered radiation measured at the forehead and at the collar badge as a function of procedure was highly correlated, though different in magnitude. The average dose per unit screen time was 0.74 µSv per minute. It is recognised that this pilot study involved only one interventional radiologist, and we hope to extend the study to more interventional radiologists and procedures to verify the reliability of the pilot outcomes. Introduction: Gafchromic XR-RV2 is a radiochromic film which supersedes the previous XR-R model. The film was investigated to determine its characteristics. Methods: Film was exposed on high-frequency general radiographic units, and air kerma was measured using a 15 cc ion chamber and electrometer system calibrated by the National Radiation Laboratory. Film measurements were made using an Epson Expression 10000XL scanner or an X-Rite spot densitometer and analysed using ImageJ software. Results: The film has a sensitive dose range of 1 cGy to 400 cGy (Fig. 1). At increasing doses the sensitivity decreases as the film saturates. The film also shows a notable energy dependence (Fig. 2). Post-exposure growth was found to be ~4% for film stored in a dark environment over a period of 78 days following exposure to 50 cGy. A strong polarisation effect was observed and was concluded to originate from the radiochromic dye layer. Discussion: It can be difficult to compare this film to the previous XR-R model owing to the different methods used between investigators; however, the characteristics were very similar. Polarisation effects can be a hindrance to accurate dosimetry, but they can also be used to improve the sensitivity of the film with some scanners. Conclusions: Gafchromic XR-RV2 has similar characteristics to the obsolete XR-R, with a wide dose range and relatively small post-exposure growth. Methodology for dosimetry with XR-R should be applicable to XR-RV2. Introduction: Concern about radiation dose to the breast as a result of Computed Tomography (CT) examinations has led many radiological practices to implement the use of bismuth breast shields. Publications state that the reduction in breast dose from the use of a bismuth breast shield is from 30% to 50% 1,2. However, there is little information on the effects of the imaging protocol, with respect to the dose modulation techniques of different manufacturers, on breast dose. Current CT scanners utilise dose modulation because of its dose and image noise benefits. Consequently, the aim of this study was to compare the effects on breast dose and image quality of different methods of imaging with and without dose modulation, utilising bismuth shields, and to find the optimum method of applying this dose reduction technique. Methods: Breast dose measurements were made using thermoluminescent dosimeters (TLD-100s) inserted into the right breast of a set of right and left breast attachments (size DD, 1250 mL) purchased for and attached to a male RANDO phantom. All measurements were performed on a Philips 64-slice Brilliance multidetector CT. CT scanner manufacturers utilise various methods for performing dose modulation (DOM).
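For radiochromic film dosimetry of the kind reported in the XR-RV2 study above, a net-optical-density calibration is the usual route from scanner readings to dose. The sketch below uses the functional form dose = a·netOD + b·netOD^n, one conventional choice in the film-dosimetry literature, with made-up calibration points; it is not the method of that particular study.

```python
# Sketch: net-OD calibration for radiochromic film. Placeholder data.
import numpy as np
from scipy.optimize import curve_fit

net_od = np.array([0.02, 0.05, 0.10, 0.20, 0.35, 0.55])       # measured net OD
dose_cgy = np.array([1.0, 10.0, 30.0, 90.0, 200.0, 400.0])    # delivered dose

def cal(net_od, a, b, n):
    return a * net_od + b * net_od ** n

(a, b, n), _ = curve_fit(cal, net_od, dose_cgy, p0=(50.0, 500.0, 2.0))

def film_dose(pixel_value, background_value, full_scale=65535.0):
    """Dose from scanner pixel values, assuming a 16-bit scanner."""
    od = -np.log10(pixel_value / full_scale)
    od0 = -np.log10(background_value / full_scale)
    return cal(od - od0, a, b, n)
```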
This Philips CT scanner has "DoseRight - Automatic Current Selection (ACS)" as a protocol-specific automatic method for setting uniform noise levels, with modulation options of Z DOM or D DOM. Z DOM is dose modulation in the z (longitudinal) direction: based on the scout or surview scan, the mA can vary along this axis to achieve uniform noise along the length of the body part scanned. D DOM is dose modulation that, based on the previous image rotation, varies the mA dynamically ('on the fly') as the tube rotates through different body angles, adjusting the mA for the eccentricity of the body part. The imaging protocol used the following factors: 120 kVp, 64 x 0.625 mm detector configuration, 0.75 s tube rotation and pitch 0.984, with 1 mm slice thickness reconstructed. Measurements were made to determine the difference in Mean Glandular Dose (MGD) if the scout is performed with the bismuth breast shield in place, and to investigate the resulting image quality implications. Ideally, the majority of the reduction in image quality (increase in noise and streaking) will occur in the breast region. The results are summarised in Table 1 and Table 2. Applying the breast shield prior to the scout scan reduces the MGD by 18% to 11% for shallow to deeper tissue depths respectively. This compares to a dose reduction of 40% to 34% for shallow to deeper tissue depths when the breast shield is applied following the scout scan. The distribution of dose measurements through the breast shows that the greatest reduction in breast dose is for shallow breast tissue. The results show a large difference in the reduction of MGD for the different dose modulation techniques and for imaging with or without the breast shield in place for the scout scans. This reduction varied from 13% to 45%, which is in keeping with the values quoted in the literature 1,2. For RANDO with DD-sized breasts, the optimum dose modulation technique for this Philips scanner is D DOM without the shield in place for the scout scan. For a male RANDO without breast attachments, the Dose Length Product (DLP) measurements indicate that the reduction could be significantly different, although it is likely that D DOM will provide the best dose reduction while minimising the increase in image noise. This is because lateral x-ray projections which do not pass through the breast (and therefore contribute significantly less to MGD) will have a higher mA and therefore result in lower image noise, while those that do pass through the breast will have a lower mA. Consequently, future investigations will include measuring the reduction in MGD for smaller breast sizes. The bismuth breast shield is beneficial in reducing breast MGD in thoracic imaging. In this study, utilising large breast phantoms, D DOM was the dose modulation option that optimised dose reduction for minimum loss of image quality on this Philips CT scanner. The scout or surview scan should be performed without the bismuth breast shield. Conclusions: Preliminary results with simulations and with physical phantoms indicate that GUISE has considerable potential in MRA. The first results from a study of human volunteers receiving a single bolus of gadolinium contrast agent are expected in the third quarter of 2008. Physics Department, University of Canterbury, Christchurch, New Zealand. Introduction: Mammography is the x-ray examination of the human breast used to diagnose breast disease.
The female breast is a radiosensitive organ, and when x-rays are used for examination there is a small but significant risk of radiation-induced carcinogenesis. The most radiosensitive part of the breast structure is the mammary glands, as these are considered to be the site where breast cancer most frequently occurs. The estimation of the dose absorbed by the breast is an important part of the quality control program for mammography. Therefore, accurate assessment of the surface exposure levels is considered a first step. Additionally, the relationship between the surface exposure and the absorbed dose to tissue as a function of depth is also important. The breast surface exposure is typically translated into a Mean Glandular Dose (MGD) to assess the radiation risk within the mammary glands. Method: Although the International Commission on Radiological Protection (ICRP, 1986) and other protocols agree on using MGD to measure the risk of exposure to the breast, measuring MGD directly is currently impossible because the glands are inside the breast. There are two methods of estimating the MGD: using a breast phantom, or indirect measurement from the surface exposure of the patient. In this research a breast phantom is used, enabling a wide range of breast thicknesses to be studied; thickness is one of the main factors that affect the MGD value. The phantom composition used is the standard 50% glandular / 50% fat tissue mixture. By using the phantom, the effect of different thicknesses on the MGD value will also be tested; the phantom consists of rectangular slabs of different thicknesses, with a maximum phantom thickness of 7.5 cm. Holes will be milled in one of the phantom slabs to hold the TLDs. TLDs are used because of their tissue-equivalent properties, high sensitivity, energy and dose response, and small size. In this research, different mammographic x-ray tube target/filter combinations are used: molybdenum/molybdenum (Mo/Mo) and molybdenum/rhodium (Mo/Rh), at 26, 28 and 32 kVp. Results: The set-up of the phantom and the measurements start in June 2008 at Christchurch Hospital. These measurements are expected to yield the MGD at different depths in the breast, not just at a fixed (standard) depth, over a range of kVp settings and different x-ray tube target/filter materials. The measurements will be used to estimate the radiation dose values that are necessary for quality control and optimisation. Discussion: The results will be presented. Introduction: A tube furnace was developed to separate 96 Tc from an irradiated target. The tube furnace operates on the principle that if the target material can be vapourised, and the vapour blown down a tube subject to a temperature gradient, the 96 Tc will plate out onto the walls of the tube at a different location to the other components of the target. Methods: The 96 Tc was produced in the cyclotron by bombarding a 1 cm diameter, 0.5 mm thick natural molybdenum target with 18 MeV protons. Because Mo has a very high melting point (2617 °C), it is necessary to first oxidise the target before it can be vapourised (melting point of MoO3: 795 °C). Oxidation was achieved by dissolving the molybdenum in 6 M nitric acid and then calcining it to dryness to produce MoO3. Because it is not practical to irradiate Mo in a cyclotron every time the furnace is used, experiments were performed with MoO3 spiked with 99m Tc, or with 99m Tc alone. The spiked MoO3 or 99m Tc was placed in a small ceramic boat which was loaded into the tube furnace.
The tube furnace consisted of a 110 cm long quartz glass tube surrounded by heating elements embedded in ceramic, which was in turn wrapped in insulating material. A 25 cm long removable end-piece, which protruded beyond the heating elements and insulating material, was attached to the cool end of the tube via a ground-glass bayonet joint. Because the end-piece is not heated or insulated, vapour entering the end-piece cools rapidly to room temperature. The tube was heated by three separate heating elements, each having its own maximum temperature setting. These were initially set to 850 °C, 650 °C and 500 °C, producing a temperature gradient ranging from 700 °C at the hot end of the tube down to 350 °C, with a maximum temperature of approximately 850 °C at 15 cm from the hot end of the tube, the optimal position for the ceramic boat. The heating elements were controlled such that when the furnace was switched on, the last two elements (650 °C and 500 °C) began heating; once they reached their target temperatures, the remaining element was activated. Oxygen was blown through the tube at 60 cc min⁻¹. The temperature at the hot end is sufficient to melt the target material and release both the MoO3 and the technetium oxide (Tc2O7). The vapour is encouraged to flow along the tube by the introduced oxygen, and the MoO3 and Tc2O7 plate out along the walls once the temperature falls below each compound's specific condensation point. MoO3 vapour condenses back to a solid at a higher temperature than Tc2O7, and by varying the temperature gradient appropriately the MoO3 should plate out earlier than the Tc2O7, which may then be gathered in the end-piece. This can then be removed and rinsed with saline to recover the Tc2O7. Results: The target was dissolved and calcined successfully, with particular attention paid to the concentration of the nitric acid used. It was found that 6 M nitric acid fully dissolved the target in 5 minutes at 100 °C, although increasing the concentration to 11 M slowed the reaction considerably, presumably due to oxidation of the surface creating a barrier. It was found to be necessary to remove the oxide layer before the Mo could be dissolved; this was achieved by placing it in 7 M NH4OH. When the MoO3 was heated in the furnace it plated out as crystalline deposits in the region of the tube corresponding to a temperature range of approximately 800 °C to 650 °C. By measuring the activity along the tube with a collimated contamination monitor, the region of technetium deposition was found to be at the end of the tube (350 °C). Subsequent runs with the temperature of the last element raised to 525 °C resulted in the technetium depositing in the removable end-piece as desired. Discussion: The system worked as planned, although the MoO3 plated out earlier than expected. MoO3 should plate out in the temperature range of 500 °C to 600 °C, and the fact that it plated out in a supposedly hotter region of the tube suggests that temperatures within the tube may actually be lower than preliminary measurements indicated. Conclusions: A tube furnace that can extract technetium from a natural molybdenum target has been successfully constructed. Work will continue to optimise the efficiency of this device. Introduction: JDICOM is a suite of DICOM simulation and test utilities that can be used to troubleshoot a PACS. The package can be configured (Fig. 1) to emulate all the main components of a PACS (ImageServer, RISServer, etc.) and perform most of the common services (Storage SCU/SCP, Worklist SCU/SCP, Query/Retrieve, Move, etc.).
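A minimal example of the kind of connectivity check these tools automate is a DICOM C-ECHO (a "DICOM ping"). The sketch below uses the pynetdicom library as an assumed stand-in, since any DICOM toolkit with an echo SCU would serve; the AE titles, host and port are placeholders.

```python
# Sketch: DICOM C-ECHO connectivity check with pynetdicom (an assumption;
# substitute your site's toolkit). AE titles/host/port are placeholders.
from pynetdicom import AE

ae = AE(ae_title="TEST_SCU")
ae.add_requested_context("1.2.840.10008.1.1")    # Verification SOP Class UID

assoc = ae.associate("192.168.0.10", 104, ae_title="PACS_SCP")
if assoc.is_established:
    status = assoc.send_c_echo()                 # 0x0000 indicates success
    print("C-ECHO status: 0x{0:04X}".format(status.Status))
    assoc.release()
else:
    print("Association rejected or failed; check AE titles, host and port")
```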
Communication log files are generated with useful details of requests and responses, image file headers, worklist entries, errors, etc. Wireshark is a network protocol analyser which gives details of all network traffic and can be filtered for specific source or destination IPs and communication protocols. DICOM handshaking and data transmission can be monitored and log files generated. When using these tools, it is important to have the supplier's permission and to have appropriate security measures in place; the applications themselves use standard communications and should not cause any security issues. Results: When communication issues arise during the commissioning of a modality on a PACS, there is often little error information immediately available. To identify the issue, log files have to be accessed by remote PACS support or the supplier's service centre. It is often necessary to make several attempts at communication with different DICOM settings before the issue can be resolved. A few cycles of changing settings, accessing logs and waiting for feedback can result in long delays at a time-critical point in commissioning. Using these tools, it is possible to resolve DICOM communication issues such as misidentified AE Titles or unusual reasons for Association Rejections. For example, one of our recently commissioned modalities required a ScheduledStationAETitle for worklist entries; with JDICOM it is possible to configure worklist entries and test for this situation. Another example is a modality with incomplete licensing that used a port number different from the published one; Wireshark revealed the wrong DICOM port, and the licensing issue was resolved. Discussion: JDICOM and Wireshark are relatively straightforward to install and configure, and they provide a useful window on DICOM communications between PACS modalities. Both applications generate detailed logs of all transactions and enable a step-by-step approach to troubleshooting. They are also useful educational tools, since the configuration and testing process requires some understanding of the DICOM protocol, approached through practical application rather than documentation. The tools discussed here have assisted in resolving some minor PACS communication issues. They also assist in a greater understanding of the DICOM protocol and in the education of PACS administrators. This increased knowledge base is useful when dealing with suppliers and adds to the skills of the PACS administrator. University of Waikato, Hamilton, New Zealand. Introduction: Therapy using pure beta-emitters allows optimised dose deposition, but the absence of direct photon emission complicates the task of dosimetry. Bremsstrahlung imaging can be used for dosimetry in therapies using beta-emitting radionuclides such as 90 Y. The distribution of bremsstrahlung yield in water can be obtained using Monte Carlo techniques; however, this consumes a long computing time, as the bremsstrahlung yield in water is low. The aim of this work was to reduce simulation time by acquiring a photon source kernel for 90 Y to substitute for the beta source. Methods: This study used the Geant4 Monte Carlo code to simulate isotropic emissions of a 90 Y point source and tracked the transport of electrons and bremsstrahlung photons in water. For every interaction that produced a bremsstrahlung photon, its energy, position and emission angle were recorded. These results were then used to determine the photon kernels.
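Turning the recorded interactions into a kernel amounts to a histogram over radius and energy. The sketch below assumes the Monte Carlo run produced arrays of photon creation positions (relative to the point source) and energies; the binning choices are illustrative, not those of the study.

```python
# Sketch: radial/energy photon production kernel from recorded
# bremsstrahlung events. Bin counts per decay; dividing by shell volume
# would convert the radial axis to a density.
import numpy as np

def photon_kernel(positions_mm, energies_kev, r_max=20.0, n_r=40, n_e=50):
    r = np.linalg.norm(positions_mm, axis=1)        # distance from the source
    kernel, r_edges, e_edges = np.histogram2d(
        r, energies_kev,
        bins=(n_r, n_e),
        range=((0.0, r_max), (0.0, float(energies_kev.max()))))
    return kernel, r_edges, e_edges
```

Sampling photons from such a kernel, instead of transporting every beta electron, is what buys the speed-up reported below.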
Results: The 90 Y beta decay and bremsstrahlung spectra were obtained and compared with previous studies; they agreed well. The results showed that the bremsstrahlung yield was low and that the bremsstrahlung energy decreased further away from the decay site. Discussion: The photon kernel is a close approximation to the 90 Y source, and it can be used to reduce simulation time. The results of this study determined the energy spectrum and radial distribution of bremsstrahlung photons generated by beta emission in water. The use of a photon kernel can increase the simulation speed for electron transport and is a practical approach to investigate bremsstrahlung imaging for radionuclide therapy.
References:
Evaluation of on-board kV cone beam CT (CBCT)-based dose calculation
A study on adaptive IMRT treatment planning using kV cone-beam CT
Correction of cone-beam CT values using a planning CT for derivation of the "dose of the day"
Testing of the analytical anisotropic algorithm for photon dose calculation
Issues associated with clinical implementation of Monte Carlo-based photon and electron external beam treatment planning
On dose distribution comparison
Accurate dosimetry with Gafchromic EBT film of a 6 MV photon beam in water: what level is achievable?
Radiation detection devices made from CVD diamond (Semiconductor Science and Technology)
Deep levels in CVD diamond and their influence on the electronic properties of diamond-based radiation sensors (physica status solidi (a))
CVD diamond detectors as dosimeters for radiotherapy
Dosimetric characterization of CVD diamonds in photon, electron and proton beams
Evaluation of diamond radiation dosemeters
General requirements for the competence of testing and calibration laboratories
Calibration of photon and beta ray sources used in brachytherapy
Comparison of air kerma standards of LNE-LNHB and NPL for 192 Ir brachytherapy sources: EUROMET project no. 814
On the accuracy of techniques for obtaining the calibration coefficient N_K of 192 Ir HDR brachytherapy sources
Dosimetry of interstitial brachytherapy sources: recommendations of the AAPM Radiation Therapy Committee Task Group No. 43
In-vivo dosimetry for gynaecological brachytherapy: physical and clinical considerations
Verification of the plan dosimetry for high dose rate brachytherapy using metal-oxide-semiconductor field effect transistor detectors
AAPM Task Group 108: PET and PET/CT shielding requirements
iRobot Corporation, iRobot® Scooba® Floor Washing Robots
Radiation Protection Data Handbook
Structural shielding design for medical X-ray imaging facilities,
- Radiation Shielding for Diagnostic X-rays: report of a joint BIR/IPEM Working Party
- Radiation protection standards: their evolution from science to philosophy
- Dosimetric evaluation of a 2D pixel ionization chamber for implementation in clinical routine
- Comparative evaluation of Kodak EDR2 and XV2 films for verification of intensity modulated radiation therapy
- Statistical methods for assessing agreement between two methods of clinical measurement
- Measuring agreement in method comparison studies
- Dosimetric IMRT verification with a flat-panel EPID
- Acceptance tests and quality control (QC) procedures for the clinical implementation of intensity modulated radiotherapy (IMRT) using inverse planning and sliding windows technique: experience from five radiotherapy departments
- Effect of scatter material on detector performance for in vivo dosimetry
- Clinical application of the OneDose(TM) Patient Dosimetry System for total body irradiation
- Effects on skin dose from unwanted air gaps under bolus in photon beam radiotherapy
- The influence of air cavities on interface doses for photon beams
- Dose perturbation by air cavities in megavoltage photon beams: implications for cavity surface doses
- Radiotherapy x-ray dose distribution beyond air cavities
- Use of an amorphous silicon electronic portal imaging device for multileaf collimator quality control and calibration
- Use of EPID for leaf position accuracy QA of dynamic multi-leaf collimator (DMLC) treatment
- An electronic portal imaging device as a physics tool
- The effect of intensity-modulated radiotherapy on radiation-induced secondary malignancies
- The role of nonelastic reactions in absorbed dose distributions from therapeutic proton beams in different media
- A dosimetric evaluation of water equivalent phantoms for kilovoltage x-ray beams
- Output factor measurements for a kilovoltage X-ray therapy unit
- Evaluation of the water equivalence of solid phantoms using gamma ray transmission measurements
- Absorbed Dose Determination in External Beam Radiotherapy: an International Code of Practice for Dosimetry Based on Standards of Absorbed Dose to Water
- Mass-energy absorption coefficient and backscatter factor ratios for kilovoltage x-ray beams
- A table of phantom scatter factors of photon beams as a function of the quality index and field size
- Monitor unit calculation for high energy photon beams
- BEAM: a Monte Carlo code to simulate radiotherapy treatment units
- Monte Carlo calculation of nine megavoltage photon beam spectra using the BEAM code
- Using Monte Carlo simulations to commission photon beam output factors: a feasibility study
- An investigation of accelerator head scatter and output factor in air
- Evaluation and clinical implementation of the Eclipse electron Monte Carlo dose calculation algorithm
- The nth root percent depth dose method for calculating monitor units for irregularly shaped electron fields
- Monte Carlo modelling of a Varian 21EX Clinac 6 MeV electron beam with EGS4/BEAMnrc
- Sensitivity of large-field electron beams to variations in a Monte Carlo accelerator model
- Monte Carlo simulation of large electron fields
- The Varian Eclipse treatment planning software when calculating dose beyond metallic implants
- Methods of image reconstruction from projections applied to conformation radiotherapy
- XiO Intensity Modulated Radiation Therapy (IMRT): Technical Reference
- Seminars in Ultrasound, CT, and MRI
- Proposal for a gamma-emitting stent for the prevention and treatment of restenosis of coronary arteries
- Production of technetium-96 in a standard medical cyclotron, presentation at EPSM
- A system of 99mTc production based on distributed electron accelerators and thermal separation

Southern Area Radiation Oncology Services, New Zealand. Introduction: To deliver a more uniform dose to the post-mastectomy chest wall, the aperture of an electron arc applicator was shaped into a trapezoid.

Acknowledgements: We would like to thank the radiotherapy centres for allowing us to participate in their local dosimetry comparisons.
Acknowledgements: Many thanks to Cameron Storm (Radiation Health, WA Department of Health) for counting the wipe tests.
Acknowledgements: Nigel Attwood from CMS Alphatech, MEDTEC and Sicel Technologies for the donation of the OneDose(TM) dosimeters and reader.
The EPID provides a feasible method for routine QA checks. This project is made possible by generous funding from the Western Australian Retinitis Pigmentosa Foundation.

… in the conventional manner, the actual dose to this individual will be 2.5 times higher than the allocated design dose limit. The authors have compared four different paradigms for shielding in a typical situation, in terms of the actual dose likely to be received by a representative individual (taking into account sequential exposures) and the cost of shielding:
1. Shielding using the occupancy factors and methods outlined in NCRP Report 147 and a 1 mSv per annum design dose limit.
2. Shielding using the occupancy factors and methods outlined in NCRP Report 147 and a 0.3 mSv per annum design dose limit.
3. Simply shielding on the basis of a maximum annual accumulated air kerma of 1 mGy at one metre above the floor at any point likely to be occupied by non-occupationally exposed persons.
4. Optimising shielding (for a given occupancy pattern) to limit exposure of the representative individual to less than 1 mSv per annum with a minimum shielding cost.
Results and Discussion: Preliminary results indicate that, for diagnostic X-ray facilities, paradigm 3 offers significant advantages in that:
- It is easier to apply.
- It is independent of the occupancy of the shielded areas.
- It is independent of which shielded area is occupied by the representative person, and of what proportion of the working day is spent in each of these areas.
- There are no problems with high-occupancy areas lying behind low-occupancy areas.
- The cost difference is not significant, especially when construction costs are taken into account.
- Compliance with regulatory limits is easier to check.

R. A. Chappell, C-D. Wen and A. Nicolau, W P Holman Clinic, Royal Hobart Hospital, Hobart, Tasmania. Introduction: Quality assurance (QA) of an intensity modulated radiation therapy (IMRT) treatment includes verification of the successful delivery of the modulated x-ray fields by the accelerator and multi-leaf collimator system. This is done by measuring the two-dimensional dose distribution generated for each modulated x-ray field, historically using film but now more commonly with an array of diodes or ion chambers. The measured distribution can be compared, field by field, with that predicted by the planning system - the 'planar dose map'.
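As a toy illustration of such a field-by-field comparison (hypothetical file names and a dose-difference-only check, not the clinic's QA software), a simple test against the planar dose map might look like:

```python
import numpy as np

# Hypothetical 2D dose arrays (Gy) on a common grid: the planning
# system's planar dose map and the measured distribution.
planned = np.load("planar_dose_map.npy")
measured = np.load("measured_dose.npy")

# Percentage difference relative to the planned maximum, evaluated
# only where the dose is clinically significant (>10% of maximum).
mask = planned > 0.10 * planned.max()
pct_diff = 100.0 * (measured - planned) / planned.max()

# Fraction of significant pixels agreeing within a 3% dose criterion.
# (Distance-to-agreement is ignored here; gamma analysis, described
# next, combines both criteria.)
pass_rate = np.mean(np.abs(pct_diff[mask]) <= 3.0)
print(f"Pixels within 3% (dose difference only): {100 * pass_rate:.1f}%")
```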
Alternatively, the measurements can be converted to fluence maps and used to calculate volume dose within a computerised tomography (CT) data set representing the patient's anatomy. In either case, we are obliged to pass or fail the QA according to some objective criteria representing the match between the planned dose distribution and that delivered. A popular tool used in radiation therapy is 'gamma analysis', as described by Low et al 1,2. Gamma analysis is based on the 'Van Dyk' criteria, which express our tolerance for error in terms of dose difference in regions of low dose gradient, and distance between isodose surfaces in regions of high dose gradient. Gamma analysis generalises this concept by scaling the planned and measured distributions to our preferred Van Dyk criteria. Gamma is a dimensionless number whose value is no greater than unity where the criteria are met; for a measured point r_m it is evaluated as gamma(r_m) = min over all calculation points r_c of sqrt( |r_c − r_m|²/Δd² + [D_c(r_c) − D_m(r_m)]²/ΔD² ), where Δd and ΔD are the distance-to-agreement and dose-difference criteria (e.g. 3 mm, 3%).

Conclusions: When a beam passes through a large air gap created by a patient positioning device, the dose to the patient's surface, and to depths of up to 15 cm below it, is reduced due to a decrease in scattered radiation. While the dose calculated by AAA is considerably better than that calculated by the PBC algorithm, significant errors can still result when using AAA to calculate the dose beyond a large air gap.

Radiation Oncology Queensland, Toowoomba, Australia. Introduction: In-vivo dosimetry is recommended as a quality assurance tool in radiotherapy. The SunNuclear rf-IVD diode system has been designed for in-vivo dosimetry, and in this instance it is applied to the measurement of entrance dose for 6 MV and 10 MV photon beams. AAPM Report 87 (2005) details methodologies and pitfalls in diode in-vivo dosimetry. Methods: SunNuclear QED "gold" n-type diodes were calibrated for measuring entrance dose, with measurements compared to predicted doses. Commissioning of the system included measurement of SSD and field size correction factors for the diodes. A feature of the system is its ability to temperature-correct the diode response; this feature was also tested. Results: Field size correction factors for the diodes were up to 0.5% for square field sizes from 5 cm to 20 cm. SSD correction factors were up to 2.1% for SSDs down to 80 cm. A heat pack was used to deliberately raise the temperature of the diode up to 10 °C above the calibration temperature; application of the manufacturer-specified temperature correction factors was found to correct the diode reading to within 0.5% of calibration. Following commissioning, the system was used for measurement of entrance dose on selected patients. Results were compared to entrance doses predicted by a simple dose-check program used as an independent monitor unit calculator. The recording of both uncorrected and temperature-corrected diode readings allowed a measurement of the patient's skin temperature at the point of measurement. Preliminary patient results indicate agreement between measured and predicted entrance doses to within 4%. Measured skin temperatures were in the expected range (28 to 34 °C), except when the diode was placed on a thermoplastic cast rather than the patient's skin. Discussion: Temperature compensation reduces a systematic error in diode readings that can be up to 6%, increasing confidence in the diode system's ability to detect set-up and treatment errors in radiotherapy. Experience with the diode detectors suggests re-calibration may be required every few months.
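A minimal sketch of the correction chain just described (illustrative factor values and names only, not the vendor's software) shows how a raw diode reading might be converted to entrance dose:

```python
# Hypothetical diode entrance-dose calculation illustrating the
# field-size, SSD and temperature corrections described above.
CAL_FACTOR = 0.0123  # Gy per raw reading unit, under calibration conditions
CF_FIELD = {5: 1.000, 10: 1.002, 20: 1.005}  # square field size (cm)
CF_SSD = {100: 1.000, 90: 1.010, 80: 1.021}  # SSD (cm)
TEMP_COEFF = 0.003   # assumed fractional sensitivity change per degree C

def entrance_dose(reading, field_cm, ssd_cm, temp_c, cal_temp_c=22.0):
    """Convert a raw diode reading to entrance dose (Gy)."""
    # The diode over-responds when warmer than at calibration, so the
    # temperature correction divides out the increased sensitivity.
    cf_temp = 1.0 / (1.0 + TEMP_COEFF * (temp_c - cal_temp_c))
    return reading * CAL_FACTOR * CF_FIELD[field_cm] * CF_SSD[ssd_cm] * cf_temp

# Example: 10 x 10 cm field at 90 cm SSD, diode at a skin-like 32 degrees C.
print(f"{entrance_dose(163.2, 10, 90, 32.0):.3f} Gy")
```

Comparing the uncorrected and temperature-corrected readings inverts the same relation, which is how the system can infer the skin temperature mentioned above.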
The SunNuclear rf-IVD diode system has been successfully applied to the measurement of patients' entrance dose, taking into account the field size, SSD and temperature dependence of the diode response.

Introduction: The Bland-Altman statistical test is widely used in medicine as a means of comparing a gold-standard method of measurement with a new method (Bland and Altman, 1986, 1999). One purpose of this statistical test is to determine whether a new method of measurement can be used to replace an established method. The Bland-Altman test compares the two methods using graphical techniques and simple statistical calculations (Bland and Altman, 1986). The analysis is performed by plotting the difference against the mean for the two methods, with the mean difference and two standard deviations of the differences included in the plot. In this work, we use the Bland-Altman statistical test to compare several ionisation chambers in the measurement of depth doses for a low-energy x-ray beam. Methods: Depth doses for a 75 kVp x-ray beam were measured in water using four ionisation chambers. A PTW Farmer chamber was considered the 'gold-standard' detector and compared with three parallel-plate chambers: Markus, NACP and Roos. Depth doses were measured in a water phantom from the surface to a depth of 10 cm. For the Bland-Altman statistical test, we plot the dose difference as measured by the two detectors, i.e. (Farmer chamber dose − parallel-plate chamber dose), against the mean dose from the two detectors. On the plot, reference lines are included that correspond to the 95% limits of agreement, based on the standard deviation of the dose differences. One should note that the x-axis of the plot corresponds to the mean dose from the two detectors rather than to depth.

The portal dosimetry function has proven to be a useful tool for verification of dynamically delivered IMRT fields. All measured results were accurate to within 3% and 2 mm.

Introduction: Superficial lesions are often treated with electron beams in radiation oncology. The dose delivered by electron beams from a radiation oncology linear accelerator has a complex dependence on the shape of the field, the design of the electron collimator system and the energy of the beam. Most of these effects are normally accounted for via an output factor: the ratio of the dose with a field-shaped insert to that with the standard insert in the electron applicator of the accelerator, measured at the depth of maximum dose under reference conditions. Measurement of output factors for the irregularly shaped electron fields frequently encountered in clinical practice is a time-consuming task. Several methods have been proposed to calculate output factors (see [1] and references therein), with Monte Carlo techniques considered the most accurate [2]. For small (≤3 cm), low-energy (≤9 MeV) or non-standard source-to-surface distance treatment fields, Monte Carlo techniques provide the only reliable calculations. Methods: The latest version of the BEAMnrcMP Monte Carlo code [3] is being used to commission a model of a Varian 21EX Clinac, based on an existing model [4]. Using the method described in [5-7], initial matching of model data with measurements is being performed for an open-field (40 × 40 cm²), no-applicator configuration of the linac. This will be followed by calculations with the standard applicators in place, and finally with clinical field shapes in the applicators.
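Matching of simulated and measured electron depth-dose curves is typically anchored at R50, the depth of 50% of the maximum dose, as the Results below describe. A minimal sketch of extracting R50 from a percentage depth-dose curve (synthetic data, not the authors' code):

```python
import numpy as np

def r50(depth_cm, pdd_percent):
    """Depth at which the falling PDD crosses 50% of its maximum,
    found by linear interpolation beyond the dose maximum."""
    i_max = int(np.argmax(pdd_percent))
    d, p = depth_cm[i_max:], pdd_percent[i_max:]
    j = int(np.argmax(p < 50.0))  # first sample below 50% on the fall-off
    frac = (50.0 - p[j - 1]) / (p[j] - p[j - 1])
    return d[j - 1] + frac * (d[j] - d[j - 1])

# Synthetic electron-like PDD purely for illustration.
depths = np.linspace(0.0, 6.0, 121)
pdd = 100.0 * np.exp(-((depths - 2.0) ** 2) / 2.0)
print(f"R50 = {r50(depths, pdd):.2f} cm")
# The measured-minus-simulated R50 difference would then guide tuning
# of the incident electron beam energy in the accelerator model.
```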
Calculations are being performed on a small (~30 GFlop/s) cluster of twelve 2.4 GHz Pentium IV cores using the Linux Rocks cluster suite [8], which was commissioned specifically for this work. Investigations are also being made into using Condor [9] to harvest compute cycles on idle machines, and into using the ~200 GFlop/s Sony PlayStation 3 for computations. Results: Initial simulation results show significant variations between measured and simulated open-field, applicator-free percentage depth dose curves for 6 MeV and 9 MeV beams, where the simulated beam energies were tuned so that the resulting PDD curves match the measured PDD curves at R50. Discussion: Commissioning of the model will require adjustment of the model parameters, such as the position and thickness of the scattering foils [5-7], to achieve better matching of the simulated linac dosimetry to measurements before proceeding with the inclusion of applicators and clinical fields in the model.

Introduction: Brachytherapy allows one to deliver a high radiation dose locally to a nearby tumour, with rapid dose fall-off in the surrounding tissue. In-vivo dosimetry measurements of a brachytherapy treatment are difficult to perform due to the steep dose gradient of the radioactive sources. These measurements may be required to verify the dose predicted by the brachytherapy treatment planning system, which assumes full-scatter conditions in its dose calculations. Previous studies have highlighted the dose uncertainties due to the finite patient size and tissue inhomogeneities 1,2. The aim of this research was to verify the accuracy of a brachytherapy treatment of a superficial melanoma using in-vivo thermoluminescence dosimetry. The measured and predicted doses are given in Table 1. Doses received by TLDs within the PTV (planning target volume) were reasonably close to the prescribed 100%, with a maximum deviation of 7.5% at the surface of the patient and 8.8% inside the surface mould. The doses to the eye and neck region were less than 15% of the prescribed dose. There was good agreement with the predicted dose for the right eye, with a deviation of 0.7%. For the dose in the middle of the neck there was a much larger discrepancy, of 8.8%, for the first treatment fraction; the other doses measured on the neck were in reasonable agreement, with a maximum deviation of 4.3%. The deviations between measured and predicted doses are attributed to (1) the accuracy of TLD placement, (2) the placement of TLD chips in a high dose gradient and (3) reduced backscatter conditions for the measurements. Considering the measurement accuracy of TLDs, a maximum tolerance of 10% was allowed between the measured and predicted doses; for this particular experiment, all the TLD-measured doses came within this value. The large discrepancy in the doses on the right lateral neck for the two fractions is attributed to positional error of the TLDs at the time of treatment and incorrect estimation of the anatomical position of the neck region within PLATO. It was concluded that the brachytherapy treatment was safe and could be continued for the remaining fractions according to the plan.

Introduction: This paper concerns the power characterization of four different digital signal-processing schemes for extracting heart rate and heart-rate-variability parameters from a test noisy ECG signal using a Wireless On-Body Network (OBN).
The paper will show that transmitting raw ECG signals over TCP/IP, to be processed by a remote server, is the least efficient scheme (Scheme 1); implementing embedded digital signal processing algorithms on a sensor node (Scheme 2), parallel processing through multiple sensor nodes (Scheme 3) and implementing a dedicated DSP (dsPIC) processing layer on a sensor node (Scheme 4) improve the overall power efficiency, which is important in long-term monitoring systems. The wireless on-body network sensor node is based on the MULLE, developed in collaboration with Lulea Institute of Technology, Sweden. The MULLE is a low-power sensor node based on a Renesas M16C microcontroller and implements Bluetooth protocols that allow totally wireless patient monitoring via a mobile phone. Methods: To characterize the power profile of these schemes accurately, the algorithm was first standardized. The algorithm consists of a noise-removing digital filter implemented through convolution, baseline-drift removal through differentiation, an autocorrelation function and an adaptive R-R peak-detection stage. Low-level (assembly language) programming was used to further standardize the algorithm across the schemes. The test ECG signal represents a 4-second epoch measured at a sample rate of 250 Hz with a resolution of 10 bits, requiring approximately 2 kbytes of memory. Execution of the algorithm begins as soon as samples are converted by the A/D, in a single interleaving pass. To aid visualization, Schemes 2, 3 and 4 transmit the extracted parameters once every 4 seconds. In practical systems the 4-second epochs will not be transmitted continuously, allowing the MCU to sleep and the Bluetooth module to remain in a wait or parked state consuming very little power. The DSP in Scheme 4, however, is turned off completely during the 4-second A/D conversion by the sensor node and is activated by an SPI interrupt. A 250 mAh lithium button-cell battery is used to power the sensor nodes, and current-draw measurements are taken in series from the battery supply. The diagrams below show the power profiles for Schemes 1, 2, 3 and 4; the supply potential of the sensor node and DSP is 3.3 V. As shown in Table 1, the least efficient digital signal-processing scheme is remote computation, as in Scheme 1: sending raw epoch signals over the Bluetooth network and the Internet resulted in continuous use of the Bluetooth radio, which consumes considerable power. Scheme 2 processes the 4-second ECG signal on board a sensor node, so the radio is used only to transmit the extracted parameters to the remote server; average power consumption is reduced by 53%. Scheme 3 divides the signal-processing task between two sensor nodes over Bluetooth; the total execution time is reduced, thus requiring less energy (watt-seconds) for MCU computation at the expense of radio use. More complex signal-processing tasks such as FFT …

A new computer algorithm has been developed to accelerate the acquisition of magnetic resonance images of blood vessels following the routine venous injection of contrast agent. Contrast-enhanced magnetic resonance angiography (MRA) is becoming the method of choice for a number of diagnoses. However, only relatively low temporal resolution can be obtained with the conventional MRA sequences currently available on scanners. This restricts the ability to see subtle changes in flow that can indicate partial blockage of arteries, and the ability to obtain sharp images of the arterial system before the contrast agent has distributed into other tissues.
Our new algorithm is called GUISE (Generalised Unaliasing Incorporating object Support constraint and sensitivity Encoding). It is a parallel method (i.e., it uses multiple receiver coils) and has similarities to 2D-SENSE, the most successful parallel MRI method in current use. In preparation for carrying out a study on human volunteers, we have simulated the use of GUISE on a set of data acquired conventionally in a previous study. Methods: The principle of the algorithm is that, during the sequence of data acquisitions, only the contrast agent and blood within the blood vessels change significantly with time. By establishing the approximate region in which the blood vessels lie, the algorithm concentrates attention on information known to come from the vessel region. This allows a series of images with better time resolution to be formed and increases the visibility of the vessels. Furthermore, there is no need to synchronise the injection of the contrast agent with the data acquisition. Results: A single simulated result is shown in Figure 1. The method has also been successfully applied to a plastic bottle-and-tube phantom with contrast injected during acquisition.

Introduction: Optical Coherence Tomography (OCT) is a real-time, non-invasive optical imaging technique that can be used to produce high-resolution images of biological tissues¹. Over the past decade various studies have considered the use of all-fiber systems, as these present several advantages in terms of compactness, flexibility and stability². However, this type of system introduces significant chromatic dispersion, which in turn broadens the point-spread function (PSF) and degrades the axial resolution. In this paper, we demonstrate experimentally that two fiber stretchers made of different fiber types can be used to compensate for dispersion variations resulting from fiber inhomogeneities and from a fiber-length mismatch. Methods: Our OCT system is based on a Mach-Zehnder configuration in which the light from a 110 nm bandwidth, 840 nm SLED source is split and recombined with two single-mode broadband fused-fiber couplers. The sample arm incorporates a third, identical coupler to direct the light towards the sample and collect the reflected signal. At the output of the interferometer, the OCT signal is detected by two photodiodes in a balanced-detection configuration. The variable optical delay line required to scan through the depth of the sample is obtained by stretching part of the fiber with a piezo-electric actuator. Additionally, each arm of the interferometer incorporates another mechanical fiber stretcher. These stretchers are made by wrapping some fiber around a cylindrical piece of rubber between two metallic plates tightened together by a screw. Initially, our setup was made up entirely of FiberCore SM800 fiber, which has a single-mode cut-off wavelength of about 730 nm and is compatible with our light source. Results: From the measured spectrum of our SLED source, we had estimated the theoretical axial resolution of our OCT system to be about 5.4 μm. However, the experiment revealed a much wider PSF, covering more than 400 μm, as shown in Fig. 1(a). This is due to a large chromatic dispersion variation that can be attributed to longitudinal fiber inhomogeneities. In order to compensate for this unexpected dispersion imbalance, we replaced some of the fiber in the reference arm of our OCT system with a slightly different fiber (Nufern S630).
We selected this fiber as it is single-mode at our wavelength but has slightly different waveguide dispersion from the SM800. After substituting about 7.7 m of the original fiber with the new one, we were able to decrease the PSF width to close to the theoretical limit; Fig. 1(b) illustrates the result. The observed asymmetry is most likely due to some residual third-order and polarisation-mode dispersion, and to the non-Gaussian source spectrum. Clearly, to make our all-fiber OCT system flexible enough and easy to build, we need to avoid having to cut fiber lengths to sub-mm accuracy. To solve this, a fiber-based variable dispersion compensator was built by simply wrapping the new S630 fiber around a second stretcher. We tested our approach in the following way. We started by adjusting the length of the S630 fiber very carefully so as to balance the dispersion perfectly; the resulting PSF is shown in Fig. 2(a). Subsequently, we moved the sample mirror away by 1 mm, compensating the extra optical delay by stretching the reference arm only, thereby introducing a dispersion imbalance equivalent to an air path of 1.4 mm of fiber between the two arms. The resulting PSF, shown in Fig. 2(b), appears broadened by about 20%. Finally, we compensated this dispersion imbalance by adjusting the two stretchers simultaneously while leaving the sample mirror fixed. As shown in Fig. 2(c), the PSF can be recompressed back to its original width.

Subject Selection: Subjects were recruited to this study via several means: (1) this department is the WA state centre for visual electrophysiology and, as such, interacts with a high proportion of IRD-affected families through patient referral; (2) other members of the families identified by the process described in (1) were recruited; (3) unsolicited requests were received from subjects wishing to take part in this study; (4) unaffected subjects were recruited to act as controls. DNA Collection: DNA was collected from 671 individuals in the form of blood (540), saliva (43) or buccal swabs (88), after obtaining informed written consent. The DNA is stored in two separate locations in the Western Australian DNA Bank (WADB). Structure of the Register: The IRD Register consists of (1) a Microsoft Access 97 database, which stores demographic information, clinical indicators, family history, detailed psychophysical and electrophysiological test results and, where available, genetic analysis results; (2) a Cyrillic database, which records the family tree, family number, family name and the IRD affecting the family; and (3) hard-copy records - each subject has a hard-copy medical record file. Results and Discussion: Details of 1578 subjects are recorded in the IRD register, sourced from 850 families. DNA has been obtained from 671 of these subjects, sourced from 232 families. Table 1 presents the diagnostic grouping of the IRD sufferers recorded in this register. From this table it is estimated that the prevalence of inherited retinal disease in Western Australia is one in 2100, and the prevalence of retinitis pigmentosa in particular is one in 4500.