Linear Networks And Systems

Terpsícore Deckelman

Aug 5, 2024, 1:53:07 PM
to dustjabbobswild
AssetWise Linear Network Management provides linear referencing services and decision-support capabilities to help manage complex transportation networks. As a single source of truth for asset information across your road network, it enables more informed decisions, improved safety, lower maintenance costs, and regulatory compliance.

Create and maintain an accurate network model to reflect the ongoing changes that occur during construction, improvement, and maintenance, and visually display associated asset information for enhanced decision-making.


Optimize project, construction, and asset management with 2D, 3D, and 4D data analysis using ProjectWise, SYNCHRO, and AssetWise tools. From conception to management, streamline infrastructure development with digital twins for better outcomes.


ODOT used AssetWise as the foundation for TransInfo, its new system that reconciles and connects disparate asset data, seamlessly maintains a spatial representation of all information, and integrates all network assets and asset information systems.


Ascribing computational principles to neural feedback circuits is an important problem in theoretical neuroscience. We study symmetric threshold-linear networks and derive stability results that go beyond the insights that can be gained from Lyapunov theory or energy functions. By applying linear analysis to subnetworks composed of coactive neurons, we determine the stability of potential steady states. We find that stability depends on two types of eigenmodes. One type determines global stability and the other type determines whether or not multistability is possible. We can prove the equivalence of our stability criteria with criteria taken from quadratic programming. Also, we show that there are permitted sets of neurons that can be coactive at a steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we can provide a formulation of long-term memory that is more general than the traditional perspective of fixed point attractor networks.


A Lyapunov function can be used to prove that a given set of differential equations is convergent. For example, if a neural network possesses a Lyapunov function, then for almost any initial condition, the outputs of the neurons converge to a stable steady state. In the past, this stability property was used to construct attractor networks that associatively recall memorized patterns. Lyapunov theory applies mainly to symmetric networks in which neurons have monotonic activation functions [1, 2]. Here we show that the restriction of activation functions to threshold-linear ones is not a mere limitation, but can yield new insights into the computational behavior of recurrent networks (for completeness, see also [3]).
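As a concrete illustration, here is a minimal numerical sketch of these dynamics (the network size, weights, and input below are illustrative choices, not taken from the paper): it Euler-integrates the threshold-linear rate equation dx/dt = -x + [Wx + b]_+ for a symmetric weight matrix and shows convergence to a steady state.

    import numpy as np

    def simulate(W, b, x0, dt=0.01, steps=20000):
        """Euler-integrate dx/dt = -x + max(0, W @ x + b)."""
        x = x0.astype(float).copy()
        for _ in range(steps):
            x += dt * (-x + np.maximum(0.0, W @ x + b))
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(scale=0.3, size=(4, 4))
    W = (A + A.T) / 2          # symmetric weights, as the theory assumes
    np.fill_diagonal(W, 0.0)
    b = rng.normal(size=4)     # constant input
    x_ss = simulate(W, b, rng.uniform(size=4))
    print("steady state:", np.round(x_ss, 4))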


We present three main theorems about the neural responses to constant inputs. The first theorem provides necessary and sufficient conditions on the synaptic weight matrix for the existence of a globally asymptotically stable set of fixed points. These conditions can be expressed in terms of copositivity, a concept from quadratic programming and linear complementarity theory. Alternatively, they can be expressed in terms of certain eigenvalues and eigenvectors of submatrices of the synaptic weight matrix, making a connection to linear systems theory. The theorem guarantees that the network will produce a steady state response to any constant input. We regard this response as the computational output of the network, and its characterization is the topic of the second and third theorems.
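For small networks, the copositivity condition can be probed numerically. The sketch below is an assumption-laden check, not the paper's method: it minimizes x^T (I - W) x over the probability simplex from several random starts, so a strictly positive minimum is consistent with strict copositivity, though multi-start minimization is not a proof for large networks.

    import numpy as np
    from scipy.optimize import minimize

    def min_quadratic_on_simplex(A, n_starts=50, seed=0):
        """Approximate min of x^T A x over {x >= 0, sum(x) = 1}."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        best = np.inf
        for _ in range(n_starts):
            x0 = rng.dirichlet(np.ones(n))          # random point on the simplex
            res = minimize(lambda x: x @ A @ x, x0,
                           method="SLSQP",
                           bounds=[(0.0, None)] * n,
                           constraints=[{"type": "eq",
                                         "fun": lambda x: x.sum() - 1.0}])
            best = min(best, res.fun)
        return best

    W = np.array([[0.0, 0.4], [0.4, 0.0]])          # illustrative weights
    val = min_quadratic_on_simplex(np.eye(2) - W)
    print("min of x^T(I-W)x on simplex:", val)      # > 0 suggests copositive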


In the second theorem, we introduce the idea of permitted and forbidden sets. Under certain conditions on the synaptic weight matrix, we show that there exist sets of neurons that are "forbidden" by the recurrent synaptic connections from being coactivated at a stable steady state, no matter what input is applied. Other sets are "permitted," in the sense that they can be coactivated for some input. The same conditions on the synaptic weight matrix also lead to conditional multistability, meaning that there exists an input for which there is more than one stable steady state. In other words, forbidden sets and conditional multistability are inseparable concepts.
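The linear analysis of subnetworks described in the abstract suggests a simple test, which the sketch below assumes: a set of coactive neurons is treated as permitted when the principal submatrix of the symmetric weight matrix restricted to that set has all eigenvalues below one, so that the linearized subnetwork dynamics (-I + W on the set) are stable. The weight matrix here is illustrative.

    import itertools
    import numpy as np

    def is_permitted(W, sigma):
        """Assumed criterion: largest eigenvalue of W[sigma, sigma] < 1."""
        sub = W[np.ix_(sigma, sigma)]
        return np.max(np.linalg.eigvalsh(sub)) < 1.0

    def permitted_sets(W):
        n = W.shape[0]
        sets = []
        for k in range(1, n + 1):
            for sigma in itertools.combinations(range(n), k):
                if is_permitted(W, list(sigma)):
                    sets.append(sigma)
        return sets

    W = np.array([[ 0.0,  0.8, -0.5],
                  [ 0.8,  0.0, -0.5],
                  [-0.5, -0.5,  0.0]])
    print(permitted_sets(W))   # all singletons and pairs; the full set is forbidden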


The existence of permitted and forbidden sets suggests a new way of thinking about memory in neural networks. When an input is applied, the network must select a set of active neurons, and this selection is constrained to be one of the permitted sets. Therefore the permitted sets can be regarded as memories stored in the synaptic connections.


Our third theorem states that there are constraints on the groups of permitted and forbidden sets that can be stored by a network. No matter which learning algorithm is used to store memories, sets of neurons cannot be arbitrarily divided into permitted and forbidden sets, because subsets of permitted sets have to be permitted and supersets of forbidden sets have to be forbidden.
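Under the same assumed eigenvalue test, this closure property can be checked directly; for symmetric matrices it follows from Cauchy's interlacing theorem, since the eigenvalues of a principal submatrix cannot exceed the largest eigenvalue of the matrix. A small self-contained check:

    import itertools
    import numpy as np

    def is_permitted(W, sigma):
        """Assumed criterion: largest eigenvalue of W[sigma, sigma] < 1."""
        return np.max(np.linalg.eigvalsh(W[np.ix_(sigma, sigma)])) < 1.0

    rng = np.random.default_rng(1)
    A = rng.normal(scale=0.6, size=(5, 5))
    W = (A + A.T) / 2
    for k in range(2, 6):
        for sigma in itertools.combinations(range(5), k):
            if is_permitted(W, list(sigma)):
                # interlacing: every subset of a permitted set tests permitted
                assert all(is_permitted(W, list(tau))
                           for tau in itertools.combinations(sigma, k - 1))
    print("subset-closure holds on this example")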




Stochastic gradient descent (SGD) remains the method of choice for deep learning, despite its limitations for ill-behaved objective functions. In cases where it could be estimated, the natural gradient has proven very effective at mitigating the catastrophic effects of pathological curvature in the objective function, but little is known theoretically about its convergence properties, and it has yet to find a practical implementation that would scale to very deep and large networks. Here, we derive an exact expression for the natural gradient in deep linear networks, which exhibit pathological curvature similar to the nonlinear case. We provide for the first time an analytical solution for its convergence rate, showing that the loss decreases exponentially to the global minimum in parameter space. Our expression for the natural gradient is surprisingly simple, computationally tractable, and explains why some approximations proposed previously work well in practice. This opens new avenues for approximating the natural gradient in the nonlinear case, and we show in preliminary experiments that our online natural gradient descent outperforms SGD on MNIST autoencoding while sharing its computational simplicity.
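To make the idea concrete without reproducing the paper's deep-network derivation, here is a minimal sketch of natural-gradient descent on a single-layer linear least-squares model, where the Fisher information reduces to the input covariance; preconditioning the gradient by its inverse whitens away the ill-conditioned curvature, so the loss collapses in essentially one damped Newton-like step. All sizes and data are synthetic.

    import numpy as np

    rng = np.random.default_rng(0)
    N, d = 512, 20
    X = rng.normal(size=(N, d)) * np.logspace(0, -2, d)   # ill-conditioned inputs
    w_true = rng.normal(size=d)
    y = X @ w_true

    F = X.T @ X / N + 1e-6 * np.eye(d)   # Fisher = input covariance (+ damping)
    F_inv = np.linalg.inv(F)

    w = np.zeros(d)
    for step in range(200):
        grad = (X @ w - y) @ X / N       # gradient of 0.5 * mean squared error
        w -= 1.0 * grad @ F_inv          # natural-gradient step
    print("final loss:", 0.5 * np.mean((X @ w - y) ** 2))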




Deep learning (DL) has been playing a significant role in many fields of science, including imaging and medical image processing. Recently, DL techniques have also been applied to medical image reconstruction [1,2,3,4]. It has been shown that DL approaches can outperform state-of-the-art compressed sensing (CS) [5] techniques at translating the signal acquired by an imaging device into a usable medical image by means of a suitable domain transformation.


In this work we introduce a minimal linear network (MLN) for magnetic resonance (MR) image reconstruction capable of outperforming the best available CS and dictionary-learning alternatives, and show its applicability under benchmark simulation tests and challenging imaging conditions, where near artefact-free images were obtained. We emphasize model simplicity, allowing us to probe into the elements that contribute to the recent successes of DL-based MR image reconstruction.
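The MLN architecture is not specified in this excerpt, so the following is only a toy one-dimensional stand-in for the general idea: learn a single complex-valued linear map from undersampled k-space measurements to the signal by regularized least squares on synthetic training pairs. Sizes, sampling pattern, and training data are all illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 32                                        # 1D "image" size for brevity
    keep = np.sort(rng.choice(n, size=n // 2, replace=False))  # undersampling mask

    def forward(img):                             # image -> undersampled k-space
        return np.fft.fft(img)[keep]

    # training pairs: random piecewise-smooth signals and their measurements
    train = [np.cumsum(rng.normal(size=n)) for _ in range(500)]
    K = np.stack([forward(x) for x in train])     # (500, n//2), complex
    X = np.stack(train).astype(complex)           # (500, n)

    # ridge-regularized least squares: X ~ K @ W
    lam = 1e-3
    W = np.linalg.solve(K.conj().T @ K + lam * np.eye(n // 2), K.conj().T @ X)

    x_test = np.cumsum(rng.normal(size=n))
    x_hat = forward(x_test) @ W                   # linear reconstruction
    print("rel. error:", np.linalg.norm(x_hat - x_test) / np.linalg.norm(x_test))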


DL approaches to MR image reconstruction that have been investigated to date include: (a) using neural networks (NN) to improve images post hoc after standard reconstruction [1]; (b) reconstructing from the signal domain (k-space) by Fourier transformation (FT) [2], typically in combination with multi-coil parallel imaging reconstruction [6,7]; (c) training an NN to perform the full transformation from signal to image domain through a representation manifold [4]; and (d) mimicking CS iterative reconstruction techniques while allowing more versatility through non-linear operations [3,8,9], under the assumption that MR images lie in a restricted manifold whose representation features are learnt from training samples.


Linear neural networks have been analyzed previously [10] and were shown to converge, in general domains, in a manner similar to non-linear networks [11]. Nonlinear activation functions for neural networks over the complex field [12] have been applied, for example, to MR fingerprinting [13] reconstruction [14].


Importantly, MRI differs fundamentally from other imaging modalities: images are inherently complex-valued, with important information contained in the phase; they are acquired indirectly by sampling a different domain (the spatial-frequency, or Fourier, domain known as k-space); and they are multidimensional, with different axes encoded in different ways. Commonly, there is a readout (RO) axis acquired continuously at high bandwidth, a phase-encoding (PE) axis acquired step-wise at low bandwidth (both in the spatial-frequency domain), and a slice-select (SS) axis acquired directly in the image domain by slice-selective excitation. Further dimensions/axes include multiple receive channels (for parallel imaging), timeseries, or any other data dimension that varies during the scan (e.g. diffusion weighting or direction).
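The basic domain transform is easy to demonstrate (the phantom below is synthetic and purely illustrative): a fully sampled 2D k-space is inverted by an FFT along the RO and PE axes, while the SS axis is already in the image domain, and discarding every other PE line produces the familiar aliasing along that axis.

    import numpy as np

    img = np.zeros((64, 64), dtype=complex)
    img[20:44, 24:40] = np.exp(1j * 0.5)          # complex-valued "anatomy"

    kspace = np.fft.fftshift(np.fft.fft2(img))    # acquisition samples k-space
    recon = np.fft.ifft2(np.fft.ifftshift(kspace))
    print("max |error|:", np.abs(recon - img).max())   # ~ numerical precision

    # undersampling every other PE line causes aliasing along that axis
    kspace_us = kspace.copy()
    kspace_us[::2, :] = 0
    alias = np.fft.ifft2(np.fft.ifftshift(kspace_us))
    print("aliased-energy ratio:",
          np.linalg.norm(alias - img) / np.linalg.norm(img))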
