CCP4mg is a molecular-graphics program that is designed to give rapid access to both straightforward and complex static and dynamic representations of macromolecular structures. It has recently been updated with a new interface that provides more sophisticated atom-selection options and a wizard to facilitate the generation of complex scenes. These scenes may contain a mixture of coordinate-derived and abstract graphical objects, including text objects, arbitrary vectors, geometric objects and imported images, which can enhance a picture and eliminate the need for subsequent editing. Scene descriptions can be saved to file and transferred to other molecules. Here, the substantially enhanced version 2 of the program, with a new underlying GUI toolkit, is described. A built-in rendering module produces publication-quality images.
If you are looking for user-friendly software for two-dimensional analysis of framed structures, you might want to try GRASP, which stands for Graphical Rapid Analysis of Structures Program. GRASP was developed by ACECOMS at the School of Civil Engineering, Asian Institute of Technology. It is designed to provide an interactive, easy-to-use graphical environment for modeling and analyzing beams, trusses, and rigid frames. GRASP supports SI, US, and metric units and allows the use of mixed units. It also has a Structure Wizard that provides a step-by-step guide for generating multistory structural models.
The best part is that GRASP is available as a free demo version that you can download from the ACECOMS website. The demo version has some limitations, such as the maximum number of nodes and members, but it still allows you to perform basic analysis of simple structures. You can also download a book titled "Understanding 2D Structural Analysis" that explains the theory and application of GRASP. The book covers topics such as structural modeling, load cases, load factors, analysis methods, result interpretation, and verification examples.
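Under the hood, programs like GRASP typically use the direct stiffness method for linear analysis. The sketch below is not GRASP code (GRASP is a standalone GUI application); it is a minimal, hypothetical illustration of that method for a two-bar truss, with assumed material properties and geometry:

```python
# Direct stiffness method for a hypothetical two-bar truss: two fixed
# support nodes, one free node carrying a point load. Not GRASP code;
# material properties and geometry below are assumed for illustration.
import numpy as np

E = 200e9      # Young's modulus, Pa (steel; assumed value)
A = 1e-4       # cross-sectional area, m^2 (assumed value)

supports = {"A": (0.0, 0.0), "B": (1.0, 0.0)}   # fixed nodes
free = (0.5, 0.5)                                # the single free node
load = np.array([0.0, -1000.0])                  # N, applied at the free node

# Assemble the 2x2 stiffness block for the free node's two DOFs:
# each bar contributes (EA/L) * [[c^2, c*s], [c*s, s^2]],
# where (c, s) are the bar's direction cosines.
K = np.zeros((2, 2))
for (x0, y0) in supports.values():
    dx, dy = free[0] - x0, free[1] - y0
    L = np.hypot(dx, dy)
    c, s = dx / L, dy / L
    K += (E * A / L) * np.array([[c * c, c * s],
                                 [c * s, s * s]])

u = np.linalg.solve(K, load)   # displacements (m) at the free node
print(f"ux = {u[0]:.3e} m, uy = {u[1]:.3e} m")
```

The same assemble-and-solve pattern scales to beams and rigid frames, where each node carries additional rotational degrees of freedom.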
Conducting in-depth qualitative analyses and disseminating their findings quickly is challenging: the methods are time-consuming, which can impede timely implementation of interventions. To better understand the tradeoffs between the need for actionable results and scientific rigor, we present our method for conducting a framework-guided rapid analysis (RA) and compare its findings to those of an in-depth analysis of the same interview transcripts.
Set within the context of an evaluation of a successful academic detailing (AD) program for opioid prescribing in the Veterans Health Administration, we developed interview guides informed by the Consolidated Framework for Implementation Research (CFIR) and interviewed 10 academic detailers (clinical pharmacists) and 20 primary care providers to elicit details about successful features of the program. For the RA, verbatim transcripts were summarized using a structured template (based on the CFIR); summaries were subsequently consolidated into matrices by participant type to identify aspects of the program that worked well and ways to facilitate implementation elsewhere. For comparison purposes, we later conducted an in-depth analysis of the transcripts. We describe our RA approach and qualitatively compare the RA and the deductive in-depth analysis with respect to consistency of themes and resource intensity.
Figure: Timeline for conducting the rapid and in-depth analyses. Some transcript coding took place as part of CFIR codebook development (i.e., the first 93 days). CFIR, Consolidated Framework for Implementation Research.
The goals of this paper were to describe our approach to conducting a CFIR-informed RA, assess the consistency of findings from our RA in comparison to an in-depth analysis of the same data, and compare resource intensity of the two analytic approaches. Overall, we found RA to be sufficient for providing our operations partner with actionable findings and recommendations, which was necessary given the relatively short timeline included in the policy mandate for implementation of AD programs throughout the VA.
With respect to the consistency of our RA and in-depth analysis findings [14], themes from the RA were well aligned with the CFIR domains and constructs from the in-depth analysis. Considering that the CFIR was embedded throughout the evaluation, including in the design of the interview guides and, indirectly, in the development of the summary tables, these findings are not entirely unexpected. In retrospect, we could have incorporated the CFIR constructs explicitly into the RA summary tables rather than indirectly through the interview guides, which may have made the RA even faster. This would still be a rapid analytic approach, but it would have carried the CFIR more transparently through the RA portion of the project. Depending on the anticipated uses of similar evaluation data, this may further streamline the method.
Given the complexity of the CFIR (i.e., multiple constructs per domain), rapid analytic methods like ours may be helpful when working with large numbers of interviews, where line-by-line coding and analysis may not be feasible, and/or when evaluating highly complex interventions, where key aspects of implementation must be identified quickly. However, careful consideration should be given before adopting this approach, to limit the potential for bias and for an overly narrow interpretation of the data. It is important to keep in mind that the combination of the strength and frequency of qualitative comments is what helps us understand their relative importance and contributions to our research [20], regardless of whether a rapid or an in-depth analytic approach is used.
Staffing, funding, and other resource constraints make it challenging to complete research and evaluation projects rapidly while generating valid findings. Delays can impede the implementation of innovative programs or interventions when data are needed to monitor, modify, or scale up, or when policy changes necessitate timely feedback. Our team was charged with providing rapid feedback to implementers of a successful AD program in one VA regional network ahead of its dissemination across the VA. To accomplish this, we successfully applied a rapid analytic method.
Achieving balance between the need for actionable results and scientific rigor is challenging. The use of rapid analytic methods for the analysis of data from a process evaluation of a successful AD program proved to be adequate for providing our operations partner with actionable suggestions in a relatively short timeframe.
In everyday life, our visual surroundings are not arranged randomly but structured in predictable ways. Although previous studies have shown that the visual system is sensitive to such structural regularities, it remains unclear whether the presence of an intact structure in a scene also facilitates the cortical analysis of the scene's categorical content. To address this question, we conducted an EEG experiment during which participants viewed natural scene images that were either "intact" (with their quadrants arranged in typical positions) or "jumbled" (with their quadrants arranged into atypical positions). We then used multivariate pattern analysis to decode the scenes' category from the EEG signals (e.g., whether the participant had seen a church or a supermarket). The category of intact scenes could be decoded rapidly, within the first 100 ms of visual processing. Critically, within 200 ms of processing, category decoding was more pronounced for intact scenes than for jumbled scenes, suggesting that the presence of real-world structure facilitates the extraction of scene category information. No such effect was found when the scenes were presented upside down, indicating that the facilitation of neural category information is indeed linked to a scene's adherence to typical real-world structure rather than to differences in visual features between intact and jumbled scenes. Our results demonstrate that early stages of categorical analysis in the visual system exhibit tuning to the structure of the world that may facilitate the rapid extraction of behaviorally relevant information from rich natural environments.

NEW & NOTEWORTHY: Natural scenes are structured, with different types of information appearing in predictable locations. Here, we use EEG decoding to show that the visual brain uses this structure to efficiently analyze scene content. During early visual processing, the category of a scene (e.g., a church vs. a supermarket) could be decoded more accurately from EEG signals when the scene adhered to its typical spatial structure than when it did not.
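Time-resolved decoding of this kind is commonly implemented by training a classifier on the channel pattern separately at each time point. The sketch below is a generic illustration using synthetic data and scikit-learn, not the study's actual pipeline; the dimensions, classifier, and effect size are all assumed:

```python
# Hedged sketch of time-resolved multivariate pattern analysis (MVPA):
# decode a binary scene category from simulated "EEG" epochs at each
# time point. All parameters here are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 32, 50    # assumed dimensions
y = np.repeat([0, 1], n_trials // 2)          # e.g., church vs. supermarket

# Synthetic epochs: class-dependent signal appears after time index 20,
# mimicking category information emerging after stimulus onset.
X = rng.standard_normal((n_trials, n_channels, n_times))
X[y == 1, :8, 20:] += 0.8

# Decode the scene category separately at every time point (5-fold CV).
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print(f"peak accuracy {accuracy.max():.2f} at time index {accuracy.argmax()}")
```

With real EEG data, the epochs array would come from a preprocessing package such as MNE-Python, and the per-time-point accuracies would be compared across conditions (e.g., intact vs. jumbled) rather than inspected in isolation.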
We have presented GRAPE, a software resource with specialized data structures, algorithms, and fast parallel implementations of graph-processing methods, coupled with efficient implementations of algorithms for random-walk-based graph representation learning (RW-based GRL). Our experiments show that GRAPE significantly outperforms state-of-the-art graph-processing libraries in empirical space and time complexity, with improvements of up to several orders of magnitude on common RW-based analysis tasks. This allows substantially bigger graphs to be analyzed and may improve the performance of graph machine learning methods by allowing more comprehensive training, as shown by our experiments on three large real-world graphs. In addition, the substantial reduction in computational time that GRAPE achieves on common graph-processing and learning tasks will help reduce the carbon footprint of machine learning researchers and graph-processing practitioners across several disciplines.
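The core primitive behind RW-based graph representation learning is sampling random walks, which GRAPE accelerates with succinct data structures and parallel implementations. A conceptual, plain-Python illustration on a toy adjacency-list graph (this is not GRAPE's API):

```python
# Conceptual sketch of uniform random-walk sampling, the primitive that
# GRAPE accelerates. Plain Python on a toy undirected graph; GRAPE itself
# uses succinct data structures and parallel implementations.
import random

adj = {                      # tiny toy graph as an adjacency list
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b"],
    "d": ["b"],
}

def random_walk(start, length, rng):
    """Return a uniform random walk of `length` steps starting at `start`."""
    walk = [start]
    for _ in range(length):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

rng = random.Random(42)
walks = [random_walk(node, 5, rng) for node in adj for _ in range(10)]
print(walks[0])
```

In DeepWalk-style pipelines, such walks are treated as "sentences" and fed to a skip-gram model to learn node embeddings; GRAPE's contribution is making this sampling feasible on very large graphs.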
Single-molecule binding assays allow the interrogation of individual macromolecules from a biological process using purified components or cellular extracts. In contrast to ensemble measurements, single-molecule assays can report the order and kinetics of individual molecular interactions [1-6]. The introduction of commercial microscopes designed for single-molecule imaging spurred wide adoption of this technology. However, the absence of easy-to-use software with automated pipelines for extracting kinetic data from an image series makes data analysis slow and tedious. Many key steps for obtaining accurate kinetic parameters from co-localization single-molecule spectroscopy (CoSMoS) images still require manual user intervention and parameter selection guided by user experience [7-9]. User-dependent parameter choice and manual inspection of images dramatically limit throughput. For example, after spots are detected via user-defined intensity and bandpass-filter thresholds, the user must still inspect the images to remove overlapping spots and false-positive events. Finally, no standard procedure exists to systematically assess the quality of the analysis. To overcome these hurdles, we constructed a pipeline for rapid processing of CoSMoS images that also quantitatively assesses experimental data quality. The pipeline automates experimental calibration and high-confidence spot detection and localization using just minutes of computational time. CoSMoS data processing is controlled through a single graphical user interface, and the modular design allows individual functional modules to be adjusted for a wide variety of experiments. The pipeline improves co-localization detection, data analysis speed, and experimental reproducibility.
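The spot-detection step described above (bandpass filtering followed by intensity thresholding) can be sketched as follows. This is a generic difference-of-Gaussians illustration on synthetic data with assumed parameters, not the pipeline's actual code:

```python
# Hedged sketch of bandpass-and-threshold spot detection of the kind the
# CoSMoS pipeline automates: band-pass the image with a difference of
# Gaussians, then keep bright local maxima. All parameters are assumed.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.1, (64, 64))            # synthetic noisy frame
yy, xx = np.mgrid[0:64, 0:64]
for y, x in [(10, 12), (30, 40), (50, 20)]:     # three diffraction-limited spots
    img += np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * 1.5 ** 2))

# Band-pass: a narrow blur (smooths noise) minus a wide blur (background).
bp = ndimage.gaussian_filter(img, 1.0) - ndimage.gaussian_filter(img, 5.0)

# A pixel counts as a detection if it is the local maximum of its
# 5x5 neighborhood and exceeds an intensity threshold.
is_max = bp == ndimage.maximum_filter(bp, size=5)
detections = np.argwhere(is_max & (bp > 0.3))
print(detections)
```

In practice, the detected coordinates would then be refined by sub-pixel localization (e.g., 2D Gaussian fitting) before kinetic analysis.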