Spire is a polyphonic software synthesizer that combines a powerful sound engine, flexible modulation architecture, and a graphical interface offering unparalleled usability. Spire embodies the best of both software and hardware synthesizers.
Inspired by artists such as Martin Garrix, Hardwell, Dannic, Sander van Doorn, R3hab, and others, all presets have been carefully designed to fit optimally in the mix. And as always, all presets come with full modwheel and velocity assignments for maximum flexibility.
The usage of our products (in particular samples, loops and patches) for the creation of competitive products like a sound library or as a sound library for any kind of synthesizer, virtual instrument, sample-based product, sample library or any other musical instrument is strictly prohibited.
Copying, duplicating, lending, reselling, or trading this product, or all or any part of its content, is strictly prohibited. Only the original purchaser of this product has the right to use the included content royalty-free in his or her commercial and non-commercial music/multimedia productions.
The non-exclusive license is granted worldwide to a single user only, for the entire term of copyright protection, and is non-transferable. You may not transmit the content electronically or place it in a time-sharing, file-sharing, or public computer network. All rights of the producer and of the owner of the work are reserved. Unauthorized duplication of this download violates applicable law. The product may be transferred electronically (downloaded, copied, or backed up), in whole or in part, only to the device or medium of the original customer.
Owing to the worldwide use of our products, Resonance Sound is not responsible for conflicts between music labels relating to the use of the same sounds, samples, or loops.
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.
Another line of studies employed non-experimental settings to elucidate the neural basis of unrestrained hand and arm movements in monkeys (Evarts, 1965; Mavoori et al., 2005; Aflalo and Graziano, 2006; Jackson et al., 2006, 2007) and spontaneous, uninstructed language in humans (Towle et al., 2008). Investigations under experimentally unrestricted conditions capture the complexity and functional diversity of real-life behavior more fully than standard laboratory procedures do (Gibson, 1950) and may prevent contamination of findings by the experimental environment itself (Bartlett, 1995), e.g., by subjects' emotional reactions to the experimenter (Ray, 2002).
Previous studies have also used conditions approximating real life to study the densely interwoven perception and production processes underlying social interaction in humans. For instance, social interaction has been studied using fMRI experiments in virtual-reality social encounters between subjects and virtual characters (Wilms et al., 2010; Ethofer et al., 2011; Pfeiffer et al., 2011). Also, techniques have been developed to simultaneously record the brain activities of two or more interacting individuals with the help of EEG (Babiloni et al., 2006), fMRI (Montague et al., 2002), and MEG (Baess et al., 2012). In this way, various kinds of interactive behaviors can be investigated, e.g., in spontaneous communication of subjects while playing games (Montague et al., 2002; Babiloni et al., 2006), imitating others' movements (Dumas et al., 2010), or collectively making music (Lindenberger et al., 2009).
Following this trend towards increasingly naturalistic approaches, it would be highly interesting to study brain activity underlying real-life human social interaction outside controlled experiments. This may enable investigators not only to rule out the unwanted effects induced by experimental settings, but, even more so, to investigate the specific kinds of social interaction situations that cannot be studied experimentally at all, or only with great difficulty.
Such investigations of the neural basis of social interaction in non-experimental, real-life environments are, however, currently lacking (Hari and Kujala, 2009). Major reasons for the absence of such studies are methodological limitations of most recording techniques in humans: traditional imaging methods [e.g., positron emission tomography (PET) or functional magnetic resonance imaging (fMRI)] require a stationary apparatus, with the subjects placed in a fixed position, and therefore these techniques cannot be employed in measurements of dynamic, unrestricted real-life behavior. Non-invasive electroencephalography (EEG) is also not well suited for this purpose due to its limited spatial resolution and its high susceptibility to artifacts, such as those induced by speaking or other movements (Figure 1).
Example of artifacts related to head movement in simultaneous non-invasive, scalp-recorded EEG (upper 4 traces) and ECoG recorded using subdurally implanted electrodes (lower 6 traces). The height of the black scale bar in the lower right corner of the plot corresponds to 100 μV.
In the present study, we employed, for the first time, human electrocorticography (ECoG) to study neural processes related to real-life social interaction. Owing to the combination of superior temporal resolution and much higher resistance to artifacts compared with non-invasive recordings (see Figure 1 and Ball et al., 2009a), ECoG proved a valuable technique for investigating human motor (Crone et al., 1998a,b) and language (Crone et al., 2001a,b; Sinai et al., 2005) functions, and became a promising candidate signal for clinical brain-machine interface (BMI) applications (Leuthardt et al., 2006; Pistohl et al., 2008, 2012; Ball et al., 2009b), including approaches for restoration of speech production (Blakely et al., 2008; Leuthardt et al., 2011; Pei et al., 2011). In the present study, we performed post hoc analyses of ECoG data continuously recorded for pre-neurosurgical diagnostics over several days or weeks during the daily hospital life of epilepsy patients. Throughout the analyzed time periods, patients were conscious, fully alert, and exhibited a wide spectrum of social behaviors, including active interaction with clinical personnel, family, friends, and other patients.
Previous research on social interaction in the fields of linguistics, social psychology, and health care has extensively studied communication between doctors and patients (Roter and Hall, 1989; Ong et al., 1995; Ha and Longnecker, 2010; Nowak, 2011). By contrast, interaction between intimate partners has been a focus of psychosociological and linguistic research (Sillars and Scott, 1983; Gottman and Notarius, 2000; Pennebaker et al., 2003). Here, we aimed to elucidate, for the first time, the differential neural processes underlying these interactive situations in real-life communication. To do so, we compared conversations during which patients were either talking to their life partners (Condition 1, C1) or to their attending physicians (Condition 2, C2). The two conditions can be assumed to differ in various aspects of social interaction. For instance, patients are more intimate and emotionally attached to their life partners, and share more life experiences with them than with their physicians. Conversely, conversations with physicians are typically more emotionally contained and based on factual communication (Good and Good, 1982).
To estimate the potential usefulness of neural differences during communication with different dialog partners for BMI applications, we also performed a single-trial classification analysis. BMI-based restoration of expressive speech is a topic of growing interest (Pei et al., 2012). So far, BMI studies have mainly aimed at decoding such communication-relevant aspects as phonemes (Blakely et al., 2008; Guenther et al., 2009; Brumberg et al., 2011; Pei et al., 2011), words (Kellis et al., 2010), and semantic entities (Wang et al., 2011). Complementary to these approaches, our study makes a first step toward decoding high-level information such as the identity of the speaker, which may help to shape the language output accurately.
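To illustrate the idea of single-trial classification of dialog-partner identity, the following is a minimal sketch, not the authors' actual pipeline: it uses entirely synthetic "band-power feature" arrays (the feature values, trial counts, and class separation are all assumptions) and a simple leave-one-out nearest-centroid classifier.

```python
# Illustrative sketch only: synthetic features stand in for ECoG band power;
# the real study's features, trial counts, and classifier may differ.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_features = 40, 8  # hypothetical: trials per condition, channels x bands
c1 = rng.normal(0.0, 1.0, (n_trials, n_features))  # e.g., talking to life partner
c2 = rng.normal(0.8, 1.0, (n_trials, n_features))  # e.g., talking to physician

def nearest_centroid_loo_accuracy(a, b):
    """Leave-one-out accuracy of a nearest-centroid classifier on two classes."""
    X = np.vstack([a, b])
    y = np.array([0] * len(a) + [1] * len(b))
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i  # hold out trial i
        m0 = X[mask & (y == 0)].mean(axis=0)
        m1 = X[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - m0) < np.linalg.norm(X[i] - m1) else 1
        correct += int(pred == y[i])
    return correct / len(X)

acc = nearest_centroid_loo_accuracy(c1, c2)
print(f"leave-one-out accuracy: {acc:.2f}")
```

With the assumed class separation, the classifier performs well above the 50% chance level, which is the kind of evidence such a single-trial analysis is meant to provide.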
Three patients in pre-neurosurgical diagnostics of medically-intractable epilepsy using ECoG were included in this study upon their written informed consent. The study was approved by the Ethics Committee of the University Medical Center Freiburg. Two patients (S1, S3) were right-handed and one (S2) was ambidextrous, all had normal hearing and no history of affective disorders (for more details, see Table 1). Electrode sites analyzed in the present study were outside the seizure onset zone as determined by medical diagnostics. Cortical seizure onset zones in S1 and S2 were in the right posterior superior temporal gyrus and in left parietal areas, respectively, as depicted in Figure 2. In S3, the seizure onset zone was in the left hippocampus and was therefore not visible on the cortical surface.
All subjects had subdurally implanted platinum or stainless-steel electrodes (Ad-Tech, Racine, Wisconsin, USA) 4 mm in diameter, covered in sheets of silicone and arranged in regular grids and stripes with a 10-mm center-to-center inter-electrode distance. ECoG was recorded using a clinical EEG-System (ITMed, Germany) at a sampling rate of 1024 Hz, a high-pass filter with a cutoff frequency of 0.032 Hz, and a low-pass anti-aliasing filter at 379 Hz. Digital video recordings (25 Hz frame rate) synchronized to ECoG were acquired for all subjects.
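The acquisition parameters above can be sanity-checked numerically: the 379 Hz anti-aliasing cutoff must lie below the Nyquist frequency of the 1024 Hz sampling rate. The sketch below does this and computes the attenuation an idealized Butterworth low-pass would provide; the filter order is an assumption for illustration, since the clinical amplifier's actual filter characteristics are not specified here.

```python
# Sanity check of the reported acquisition parameters; the 4th-order
# Butterworth model is an assumption, not the amplifier's specification.
import math

fs = 1024.0     # sampling rate (Hz)
hp_cut = 0.032  # high-pass cutoff (Hz)
lp_cut = 379.0  # low-pass (anti-aliasing) cutoff (Hz)
order = 4       # assumed filter order, for illustration only

nyquist = fs / 2.0
assert lp_cut < nyquist  # anti-alias cutoff must lie below Nyquist (512 Hz)

def butter_gain(f, fc, n):
    """Magnitude response of an idealized n-th order Butterworth low-pass."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** (2 * n))

gain_100 = butter_gain(100.0, lp_cut, order)    # within the ECoG band
gain_nyq = butter_gain(nyquist, lp_cut, order)  # at the Nyquist frequency
print(f"gain at 100 Hz: {gain_100:.4f}, at Nyquist: {gain_nyq:.4f}")
```

Signals in the ECoG band of interest pass essentially unattenuated, while content at and above Nyquist is suppressed before digitization, which is the purpose of the anti-aliasing stage.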