Tweaking Com Pro Key

Susanne Sima
Jul 4, 2024, 1:57:33 PM
to inadmisra

Hardware tweaking is the process of modifying certain parts of a piece of hardware, such as replacing cables, cleaning the heads of a VCR with a branded cleaning fluid, or oiling the moving parts of an engine with the best possible oil.

Computer hardware tweaking is an extension of hardware tweaking, specifically geared towards the components of a PC. Examples include changing the voltage and clock rates of processing units, modifying RAM timings, and improving cooling systems to reduce the chance of overheating.

Software tweaking is the process of improving the performance of an application or the quality of its output. There are two ways of accomplishing this: manually, which requires familiarity with programming and is of little use if the application is closed-source and offers no built-in means of adjusting its performance, or by using another piece of software specialized for that purpose. Tweaking of this kind generally improves usability, in terms of personal configuration preferences, rather than the objective performance of the system overall.

Some very precise applications need constant and thorough tweaking to stay up to date and deliver the best possible results. One of the most obvious examples of such fine-tuning is the LAME MP3 encoder, whose 3.9x branch is not only considered the state-of-the-art MP3 encoder,[1] but also continues to push the boundaries of the MP3 codec and stay competitive with its successors.[2]

In the context of stimulant misuse, the experience of a tweaking phase often precedes the point at which many users seek and enter addiction treatment programs. Due to psychosis, they may begin sobriety in an inpatient mental health facility before being transferred to medical detox or inpatient rehab.

Time series classification has received great attention over the past decade with a wide range of methods focusing on predictive performance by exploiting various types of temporal features. Nonetheless, little emphasis has been placed on interpretability and explainability. In this paper, we formulate the novel problem of explainable time series tweaking, where, given a time series and an opaque classifier that provides a particular classification decision for the time series, we want to find the changes to be performed to the given time series so that the classifier changes its decision to another class. We show that the problem is \(\mathbf NP\)-hard, and focus on three instantiations of the problem using global and local transformations. In the former case, we investigate the k-nearest neighbor classifier and provide an algorithmic solution to the global time series tweaking problem. In the latter case, we investigate the random shapelet forest classifier and focus on two instantiations of the local time series tweaking problem, which we refer to as reversible and irreversible time series tweaking, and propose two algorithmic solutions for the two problems along with simple optimizations. An extensive experimental evaluation on a variety of real datasets demonstrates the usefulness and effectiveness of our problem formulation and solutions.

Example I: Abnormal versus normal heartbeats. Consider an electrocardiogram (ECG) recording, such as the one shown in Fig. 1. The original signal (blue curve), denoted as \(\mathcal T\), corresponds to a patient suffering from a potential myocardial infarction. An explainable time series tweaking algorithm would suggest a transformation of the original time series to \(\mathcal T\,^\prime \) (yellow curve), such that the classifier considers it normal. The figure shows a series of local transformations that would change the prediction of the opaque classifier from one class to the other.

Example II: Gun-draw versus finger-point. Consider the problem of distinguishing between two motion trajectories, one corresponding to a gun-draw and the other to a finger-point. Suppose we have an actor making a motion with her hand; the objective is to determine whether that motion corresponds to drawing a gun or to pointing a finger. In Fig. 2, we can see the trajectory of a regular finger-pointing motion (blue time series), denoted as \(\mathcal T\). The objective of explainable time series tweaking would be to suggest a transformation of \(\mathcal T\) to \(\mathcal T\,^\prime \) (yellow curve), such that the classifier considers it a gun-draw motion instead.

Complementary to interpretability, a number of studies have focused on actionable knowledge extraction [35, 37], where the focus is placed on identifying a transparent series of input feature changes intended to transform particular model predictions to a desired output at low cost. Many actionability studies have a business and marketing orientation, investigating the actions necessary to alter customer behavior, mostly for tree-based models [19, 41]. In addition, several studies place particular focus on actionability that can be performed in an efficient and optimal manner [12, 36]. For example, Cui et al. specified an algorithm to extract an action plan for additive tree ensemble models under a specified minimum cost for a given example [9]. Similarly, an actionability study by Tolomei et al. investigated actionable feature tweaking for converting true negative instances into true positives, employing an algorithm that alters the feature values of an example until the prediction of a global tree ensemble is switched under particular global cost tolerance conditions [35].

In this section, we instantiate the problem of explainable time series tweaking as either global or local, and provide three algorithms for solving the problem. In the former case, we provide a solution for the k-nearest neighbor classifier (Sect. 4.1) and in the latter case we introduce two solutions for the random shapelet forest [20] algorithm (Sect. 4.2).

We define the problem of global explainable time series tweaking for the k-nearest neighbor classifier and present a simple solution to tackle this problem. Finally, we show that our algorithm for finding a transformation \(\mathcal T\,^\prime \) for the k-nearest neighbor classifier is a generalization of the 1-nearest neighbor approach presented by Karlsson et al. [21].

The first step is to define a transformation function \(\tau (\cdot )\) for global explainable time series tweaking. Given a desired number of nearest neighbors k, a training set of time series \(\mathcal X\) with corresponding class labels \(\mathcal Y\), and a target time series \(\mathcal T\), we define the transformation function \(\tau _{NN}\) with the goal of suggesting a transformation of \(\mathcal T\), such that the transformation cost is minimized and the classifier changes its decision to the desired class label. In this case, the smallest cost corresponds to the transformation that imposes the lowest Euclidean distance between the original and transformed time series.
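Below is a minimal sketch of one way such a \(\tau _{NN}\) could be realized, assuming a k-nearest neighbor classifier over fixed-length univariate series and using k-means centroids of the desired class as candidate transformation targets. The function and parameter names (tau_nn, n_centroids, n_steps) are illustrative assumptions, not the authors' reference implementation.

```python
# Hypothetical sketch of a global tweaking step: move T toward centroids of the
# desired class until the k-NN decision flips, and keep the cheapest result.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def tau_nn(T, X, y, target_label, k=5, n_centroids=4, n_steps=50):
    """Return (T_prime, cost): a tweaked copy of T that the k-NN classifier
    assigns to target_label at the lowest Euclidean cost found, or (None, inf)."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    # Cluster the training series of the desired class into candidate centroids.
    centroids = KMeans(n_clusters=n_centroids, n_init=10).fit(
        X[y == target_label]).cluster_centers_
    best, best_cost = None, np.inf
    for c in centroids:
        # Interpolate T toward the centroid until the classifier changes its decision.
        for alpha in np.linspace(0.0, 1.0, n_steps):
            T_prime = (1 - alpha) * T + alpha * c
            if knn.predict(T_prime.reshape(1, -1))[0] == target_label:
                cost = np.linalg.norm(T - T_prime)  # Euclidean transformation cost
                if cost < best_cost:
                    best, best_cost = T_prime, cost
                break
    return best, best_cost
```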

Overall, the computational complexity of the global tweaking algorithm is O(nmC), where n is the number of time series in the training set, m the number of time points and C the number of cluster centroids. Also note that Algorithm 1 can be extended to support multivariate time series transformation by defining a multivariate distance measure, e.g., the dimension-wise sum of Euclidean distances.
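As a small illustration of the multivariate extension mentioned above, a dimension-wise sum of Euclidean distances could look like the following sketch, assuming each series is stored as an array of shape (dimensions, time points); the function name is hypothetical.

```python
import numpy as np

def multivariate_distance(T, S):
    """Dimension-wise sum of Euclidean distances between two multivariate series."""
    return sum(np.linalg.norm(T[d] - S[d]) for d in range(T.shape[0]))
```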

In this section, we define local explainable time series tweaking for the random shapelet algorithm and describe the shapelet transformation function, which is the primary building block of our solution. Next, we describe two algorithms to tackle the problem and present simple optimization strategies for both algorithms. Finally, we prove that the problem we study is \(\mathbf NP\)-hard, when considering forests of shapelet trees.

The final step is to define a suitable transformation function \(\tau (\cdot )\) for explainable time series tweaking. Given a time series example \(\mathcal T\) and an RSF classifier \(\mathcal R\), we define the transformation function \(\tau (\cdot )\) used at each conversion step while traversing a decision path in each tree of the ensemble. Recall that our goal is to suggest the transformation of \(\mathcal T\), such that the transformation cost is minimized and the classifier changes its classification decision. Again, remember that the smallest cost corresponds to the transformation that imposes the lowest Euclidean distance between the original and transformed time series.
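A minimal sketch of what one such per-condition transformation could look like is given below, assuming univariate series, Euclidean subsequence distance, and a path condition requiring the distance between \(\mathcal T\) and a shapelet s to be at most a threshold \(\theta \). The epsilon slack and the function names are illustrative assumptions rather than the paper's exact definition.

```python
# Hypothetical per-condition shapelet transformation: pull the best-matching
# segment of T toward the shapelet s just enough to satisfy a '<= theta' condition.
import numpy as np

def best_match(T, s):
    """Start index and distance of the subsequence of T closest to shapelet s."""
    dists = [np.linalg.norm(T[i:i + len(s)] - s) for i in range(len(T) - len(s) + 1)]
    i = int(np.argmin(dists))
    return i, dists[i]

def shapelet_transform(T, s, theta, epsilon=1e-3):
    """Minimally move the best-matching segment of T so that its distance to s
    becomes theta - epsilon, i.e., the condition dist(T, s) <= theta holds."""
    i, d = best_match(T, s)
    if d <= theta:
        return T.copy()                      # condition already satisfied
    T_prime = T.copy()
    # Shrink the residual (segment - s) so its norm equals theta - epsilon.
    scale = (theta - epsilon) / d
    T_prime[i:i + len(s)] = s + scale * (T[i:i + len(s)] - s)
    return T_prime
```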

Note that reversible time series tweaking is a more general version of Problem 3, as it allows any change applied to the time series to be overridden by a later change, while irreversible time series tweaking locks the time series segments that have already been changed, hence not allowing any change to be reversed. By restricting overriding transformations in Problem 3, the Euclidean distance between the original and transformed time series is guaranteed to be monotonically increasing as more transformations are applied. Since this monotonicity property is guaranteed, a transformation can be abandoned early if its cumulative cost exceeds that of the best successful transformation found so far. In contrast, reversible time series tweaking does not guarantee that the Euclidean cost is monotonically increasing and, as a consequence, does not allow for early abandoning of the transformation. Despite this, we will show in Sect. 4.2.2 that a simple optimization can achieve substantial speedups for the reversible time series tweaking algorithm.
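The early-abandoning opportunity for irreversible tweaking can be sketched as follows. The helper apply_condition, the locked-segment bookkeeping, and all names are hypothetical stand-ins for the actual algorithm; the point is only that the running Euclidean cost can be compared against the best cost found so far after every step.

```python
# Illustrative early-abandoning loop for irreversible tweaking: because changed
# segments are locked, the running cost can only grow, so a path is abandoned as
# soon as it exceeds the best successful transformation found so far.
import numpy as np

def tweak_along_path(T, conditions, apply_condition, best_cost):
    """Apply a path's (shapelet, theta, leq) conditions in order; return
    (T_prime, cost) or (None, inf) if the path is abandoned early."""
    T_prime = T.copy()
    locked = set()                              # segments already changed (irreversible)
    for shapelet, theta, leq in conditions:
        T_prime, changed = apply_condition(T_prime, shapelet, theta, leq, locked)
        locked |= changed
        cost = np.linalg.norm(T - T_prime)      # monotonically non-decreasing here
        if cost >= best_cost:
            return None, np.inf                 # early abandon: cannot improve
    return T_prime, np.linalg.norm(T - T_prime)
```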

The computational complexity of the greedy local tweaking algorithm is, assuming a given random shapelet forest, \(O(\mathcal R n \log (n) m^2)\), where n is the number of examples and m the number of time points. More concretely, in the worst case where each leaf consists of one example, the number of paths in a forest is \(\mathcal Rn\). Moreover, since each path has \(\log n\) conditions and we need to compute the minimum distance between time series of size m, we have for each path a cost of \(m\log n\). Finally, we have the additional cost of ensemble prediction, which, for an ensemble of size \(\mathcal R\), is \(O(\mathcal Rm \log n)\) for each of the \(n\) paths.
