As the vGate machine has been built using the VirtualBox software, you first have to install VirtualBox on your host machine. Since vGate was built with VirtualBox release 6.0.18, you need at least this version to be able to run the virtual machine.
After the import, select the imported virtual machine and click the Settings button in the toolbar. Review the virtual machine settings to make sure that it has the hardware it needs to operate; you can adjust the number of CPUs and the amount of RAM you want to give to the VM.
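If you prefer the command line, the same adjustment can be made with VBoxManage. A sketch, assuming the imported VM is named "vGate" (check the actual name in the VirtualBox Manager); the VM must be powered off first:

```shell
# Give the VM 2 CPUs and 4 GB of RAM; "vGate" is an assumed VM name.
VBoxManage modifyvm "vGate" --cpus 2 --memory 4096
```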
Everything in the virtual machine is already configured so that GATE can be launched without difficulty. If you want to know how the machine has been configured, all the information can be found inside the virtual machine.
Once you start the virtual machine, you can launch the Firefox web browser. Firefox opens directly on the documentation pages (in HTML) that are stored inside the virtual machine, so please refer to this documentation.
If the virtual machine is connected to a network that includes machines you own, you can use NFS (Network File System) to mount a filesystem exported by another machine inside your virtual machine. Again, you can read more about this by searching for NFS in the Ubuntu documentation.
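As a sketch of the client side, assuming a machine on your network at the hypothetical address 192.168.1.10 exports a /data directory over NFS (both the address and the export path are placeholders):

```shell
# Install the NFS client tools on Ubuntu, then mount the remote export.
sudo apt-get install nfs-common
sudo mkdir -p /mnt/shared
sudo mount -t nfs 192.168.1.10:/data /mnt/shared
```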
To do that, proceed as if you wanted to add a new physical hard disk drive (HDD) to your computer. Every step is the same, except that instead of adding a real HDD, we add a virtual one.
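On the host, the virtual HDD can also be created and attached from the command line. A sketch assuming the VM is named "vGate" and uses a storage controller named "SATA" (both names are assumptions; check them in the VM settings):

```shell
# Create a 10 GB virtual disk and attach it to the powered-off VM.
VBoxManage createmedium disk --filename ~/extra.vdi --size 10240 --format VDI
VBoxManage storageattach "vGate" --storagectl "SATA" --port 1 --device 0 \
    --type hdd --medium ~/extra.vdi
# Inside the guest, partition and format the new disk as you would a real one,
# e.g. with fdisk and mkfs.ext4.
```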
If you also want your new disk to be mounted automatically each time you reboot the machine, you have to add an entry to the file /etc/fstab. Be careful: this file is very sensitive to mistakes, and your system can be hard to repair if you modify existing lines or introduce errors into it!
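For illustration only, here are hedged examples of the kind of entries you might add (the UUID, device, and server address are placeholders, not values from this course):

```
# Local virtual disk, mounted at /mnt/data (replace the UUID with your own,
# as reported by the blkid command):
UUID=xxxx-xxxx      /mnt/data    ext4  defaults          0  2
# NFS export from another machine on your network:
192.168.1.10:/data  /mnt/shared  nfs   defaults,_netdev  0  0
```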
Different object-oriented languages offer slightly different syntax for implementing such interfaces. In C++ the blueprint that contains the definitions of the common abilities, as pure virtual method declarations, is usually an abstract base class. The concrete types are then derived from this common base and implement the pure virtual methods of the base class. A method is declared pure virtual if it does not contain any implementation, which is achieved by the = 0; syntax. A class is called abstract if it has at least one pure virtual method. Note that it is not possible to instantiate objects from an abstract class, simply because they have at least one unimplemented (pure virtual) method.
As an example, the following shows a possible implementation of the above 2D shape area computation interface. The whole abstract VShape2D base class is shown; note in particular the pure virtual VShape2D::Area() method declaration. The complete working example is available under applications/preliminaries/cpp-interface.
Next comes a possible implementation of the concrete Square class that implements the area computation interface for the concrete square shape type. Technically, this means that the Square class is derived from the VShape2D abstract base and implements its pure virtual interface method.
Also note that the VShape2D base class has another virtual method, VShape2D::Perimeter(), which does have an implementation in the base class, so this method is not pure virtual. Since it already has an implementation in the base class, the derived classes may optionally provide their own implementation or omit it; in the latter case the default implementation, i.e. the one in the base class, is used.
The above dynamic or run-time polymorphism, i.e. the run-time resolution of function calls, is achieved in C++ through the combination of inheritance and virtual methods. From a computing performance point of view, in some cases it might be beneficial to make this resolution at compile time. Static or compile-time polymorphism can be achieved with the template-metaprogramming-based Curiously Recurring Template Pattern (CRTP) C++ construct.
It will be assumed in the following that the created subdirectory, containing all the uncompressed Geant4 source code, is G4SRC. This means G4SRC = /full/path/to/geant4-v11.0.1 in the above example (note that you need to replace /full/path/to with your actual path to the uncompressed source directory), which can be set as an environment variable as:
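For example, in a bash shell (the path below is the placeholder from the text; replace /full/path/to with your actual location):

```shell
# Set the G4SRC environment variable to the uncompressed source directory.
export G4SRC=/full/path/to/geant4-v11.0.1
echo "$G4SRC"
```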
Note that (some of) the location-related variables are the same as above, when the Geant4 toolkit was built and installed from source. Therefore, we can follow exactly the same steps (and commands, but now on the VM) to configure, build and execute the /examples/basic/B1 example application. The only difference is that now we (the local1 user) have no right to modify the system. We can overcome this by simply copying the example somewhere into our user area. We will use the G4WORKDIR=/home/local1/geant4/work directory throughout this course: first we make sure that it exists, then we copy the /examples/basic/B1 example application code:
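The steps above can be sketched as follows; the example sources are assumed here to live under /usr/local/share/Geant4-11.0.1/examples (adjust this path to wherever Geant4 is installed on your system):

```shell
# Assumed install location of the Geant4 examples; adjust to your setup.
G4EXAMPLES=/usr/local/share/Geant4-11.0.1/examples
export G4WORKDIR=/home/local1/geant4/work
mkdir -p "$G4WORKDIR"                       # make sure the work area exists
cp -r "$G4EXAMPLES/basic/B1" "$G4WORKDIR"/  # copy the example into our area
cd "$G4WORKDIR/B1"
```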
We describe the development of an environment for Geant4 consisting of an application and data that provide users with a more efficient way to access Geant4 applications without having to download and build the software locally. The environment is platform neutral and offers users near real-time performance. In addition, the environment consists of data and Geant4 libraries built using low-level virtual machine (LLVM) tools, which can produce bitcode that can be embedded in HTML and accessed via a browser. The bitcode is downloaded to the local machine via the browser and can then be configured by the user. This approach provides a way of minimising the risk of leaking potentially sensitive data used to construct the Geant4 model and application in the medical domain for treatment planning. We describe several applications that have used this approach and compare their performance with that of native applications. We also describe potential user communities that could benefit from this approach.
CloudMC has been developed on the Microsoft Azure cloud. It is based on a map/reduce implementation for distributing Monte Carlo calculations over a dynamic cluster of virtual machines in order to reduce calculation time. CloudMC has been updated with new methods to read and process the information related to radiotherapy treatment verification: CT image set, treatment plan, structures and dose distribution files in DICOM format. Some tests have been designed to determine, for the different tasks, the most suitable type of virtual machine from those available in Azure. Finally, the performance of Monte Carlo verification in CloudMC is studied through three real cases that involve different treatment techniques, linac models and Monte Carlo codes.
A more economically efficient approach is the use of the Cloud, which essentially consists of a set of computing resources offered through the internet as a pay-per-usage service [16]. In a Cloud Computing environment it is easy to create a virtual cluster capable of distributing tasks onto multiple computing nodes, which makes parallel computation available. Using such an approach, there is no need for initial investment, since the facilities are already built and their maintenance is assumed by the owning companies. Instead, the whole outlay consists of the costs of the resources actually used. Furthermore, applications can be scalable, so their computational resources can change at runtime to match the real needs, while the capacity of a conventional cluster is fixed, so the efficiency might not be optimal [17]. The likelihood of future implementation of the Cloud Computing paradigm in the routine of clinical radiation therapy has been highlighted [18].
In a previous work [19], we presented CloudMC, a cloud-based platform developed on the Microsoft Azure cloud. It was originally intended to provide the computational power to run MC simulations in a short time. This is accomplished through the distribution of the calculations over a dynamic cluster of virtual machines (VMs) that are provisioned on demand and removed automatically once the simulation is finished.
Some tests were conducted to determine the most suitable type and size for the set of Worker Roles that run the MC simulations in CloudMC, and for the role responsible for the reduce tasks, the so-called Reducer Role in this paper. For performance benchmarking of the different types and sizes of Worker Roles, a PenEasy [7] execution corresponding to a 3×10^5-history MC simulation of an iodine radioactive seed in a COMS ophthalmic applicator [27] was run on a single machine of a different type/size each time. The tally files resulting from the PenEasy simulations contain the CPU time spent, which is used to evaluate the efficiency of the different VM types in executing this task.
VM types with high RAM have a similar performance for the reduce tasks. In order to choose one type as default, other features, like disk capacity and cost, need to be taken into account. For example, E-series machines have good performance, but they have less disk capacity, which may not be enough for some simulations involving very large PHSPs. Taking all this into account, G1 has been chosen as the preferred VM type for the Reducer Role.
Amazon cloud servers run virtual machine "instances". Each instance loads its operating system from an Amazon Machine Image (AMI). We have created a publicly available AMI with the current version of the open source Monte Carlo library Geant4 (9.4.p01) and based on the official Ubuntu 10.04 LTS AMI from Canonical. This instance-store AMI is publicly available in the US-East region with ID ami-50d62b39 for anyone to try out.