The periaqueductal gray (PAG) has been commonly recognized as a downstream site in neural networks for the expression of a variety of behaviors and is thought to provide stereotyped responses. However, a growing body of evidence suggests that the PAG may exert more complex modulation of a number of behavioral responses and work as a unique hub supplying primal emotional tone to influence prosencephalic sites mediating complex aversive and appetitive responses. Of particular relevance, we review how the PAG is involved in influencing complex forms of defensive responses, such as circa-strike and risk assessment responses in animals. In addition, we discuss putative dorsal PAG ascending paths that are likely to convey information related to threatening events to cortico-hippocampal-amygdalar circuits involved in the processing of fear learning. Finally, we discuss the evidence supporting the role of the PAG in reward seeking and note that the lateral PAG is part of the circuitry related to goal-oriented responses mediating the motivation to hunt and perhaps drug seeking behavior.
The Ramsey approach to policy analysis finds the best competitive equilibrium given a set of available instruments. This approach is silent about unique implementation, namely designing policies so that the associated competitive equilibrium is unique. This silence is particularly problematic in monetary policy environments where many ways of specifying policy lead to indeterminacy. We show that sophisticated policies which depend on the history of private actions and which can differ on and off the equilibrium path can uniquely implement any desired competitive equilibrium. A large literature has argued that monetary policy should adhere to the Taylor principle to eliminate indeterminacy. Our findings say that adherence to the Taylor principle on these grounds is unnecessary. Finally, we show that sophisticated policies are robust to imperfect information.
In standard approaches to monetary policy, interest rate rules often lead to indeterminacy. Sophisticated policies, which depend on the history of private actions and can differ on and off the equilibrium path, can eliminate indeterminacy and uniquely implement any desired competitive equilibrium. Two types of sophisticated policies illustrate our approach. Both use interest rates as the policy instrument along the equilibrium path. But when agents deviate from that path, the regime switches, in one example to money; in the other, to a hybrid rule. Both lead to unique implementation, while pure interest rate rules do not. We argue that adherence to the Taylor principle is neither necessary nor sufficient for unique implementation with pure interest rate rules but is sufficient with hybrid rules. Our results are robust to imperfect information and may provide a rationale for empirical work on monetary policy rules and determinacy.
This paper briefly reviews the potential consequences of electronic money for the management of the government's balance sheet through open market operations and for the regulations governing the public and private issue of payment instruments.
This paper aims at an improved understanding of the relationship between monetary policy and racial inequality. We investigate the distributional effects of monetary policy in a unified framework, linking monetary policy shocks both to earnings and to wealth differentials between black and white households. Specifically, we show that, although a more accommodative monetary policy increases employment of black households more than that of white households, the overall effects are small. At the same time, an accommodative monetary policy shock exacerbates the wealth difference between black and white households, because black households own fewer of the financial assets that appreciate in value. Over multi-year time horizons, the employment effects are substantially smaller than the countervailing portfolio effects. We conclude that there is little reason to think that accommodative monetary policy plays a significant role in reducing racial inequities in the way often discussed. On the contrary, it may well accentuate inequalities for extended periods.
Recent monetary history has been characterized by monetary authorities that appear to shift periodically between distinct policy regimes associated with higher or lower average rates of money creation. As policy regimes are not directly observable and as the rate of monetary expansion varies for reasons other than regime changes, the general public must form beliefs over current monetary policy based on historical realizations of money growth rates. Depending on the parameters governing the behaviour of monetary policy, beliefs (and therefore inflation forecasts) may evolve very slowly in the wake of actual regime changes, thereby exacerbating the costs of a disinflation policy. The quantitative importance of slowly adjusting beliefs is evaluated in the context of a computable general equilibrium model.
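The mechanism of slowly adjusting beliefs can be sketched as a simple Bayesian filter over two unobserved regimes. This is an illustrative sketch, not the paper's model: the regime means, noise level, and switching probability below are made-up values chosen only to show how beliefs can lag well behind an actual regime change.

```python
import math

# Hypothetical two-regime setup: regime H has high mean money growth,
# regime L has low mean money growth; observed growth is the regime
# mean plus Gaussian noise. All parameter values are illustrative.
MU_H, MU_L = 0.08, 0.02   # mean money growth under each regime
SIGMA = 0.05              # std. dev. of non-regime variation in growth
P_SWITCH = 0.02           # per-period probability of a regime change

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def update_belief(belief_h, observed_growth):
    """One step of Bayesian filtering: predict (the regime may have
    switched), then condition on the observed money growth rate."""
    # Prediction step: the regime persists with probability 1 - P_SWITCH.
    prior_h = belief_h * (1 - P_SWITCH) + (1 - belief_h) * P_SWITCH
    # Update step: weight each regime by the likelihood of the observation.
    like_h = normal_pdf(observed_growth, MU_H, SIGMA)
    like_l = normal_pdf(observed_growth, MU_L, SIGMA)
    return prior_h * like_h / (prior_h * like_h + (1 - prior_h) * like_l)

# A disinflation: the authority switches from H to L, but the public
# starts out convinced the regime is H. Beliefs adjust only gradually,
# because low realizations are also consistent with noise under H.
belief = 0.99
for t in range(12):
    belief = update_belief(belief, MU_L)  # growth now realized at the low mean
    print(f"t={t:2d}  P(regime H) = {belief:.3f}")
```

Because the two regimes' growth distributions overlap, each low realization shifts the posterior only partway; the more persistent agents believe the old regime to be, the longer inflation forecasts stay high after the change, which is the channel through which disinflation costs are exacerbated.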
I develop a heterogeneous-agent New Keynesian model featuring racial inequality in income and wealth, and study the interactions between racial inequality and monetary policy. Black and Hispanic workers gain more from accommodative monetary policy than White workers, mainly due to higher labor market risks. Their gains are also larger because a larger proportion of them are hand-to-mouth, while wealthy White workers gain more from asset price appreciation. Monetary and fiscal policies are substitutes in providing insurance against cyclical labor market risks. Racial minorities gain even more from an accommodative monetary policy in the absence of income-dependent fiscal transfers.
We provide an introduction to optimal fiscal and monetary policy using the primal approach to optimal taxation. We use this approach to address how fiscal and monetary policy should be set over the long run and over the business cycle. We find four substantive lessons for policymaking: Capital income taxes should be high initially and then roughly zero; tax rates on labor and consumption should be roughly constant; state-contingent taxes on assets should be used to provide insurance against adverse shocks; and monetary policy should be conducted so as to keep nominal interest rates close to zero. We begin with optimal taxation in a static context. We then develop a general framework to analyze optimal fiscal policy. Finally, we analyze optimal monetary policy in three commonly used models of money: a cash-credit economy, a money-in-the-utility-function economy, and a shopping-time economy.
Why is inflation persistently high in some periods and low in others? The reason may be absence of commitment in monetary policy. In a standard model, absence of commitment leads to multiple equilibria, or expectation traps, even without trigger strategies. In these traps, expectations of high or low inflation lead the public to take defensive actions, which then make accommodating those expectations the optimal monetary policy. Under commitment, the equilibrium is unique and the inflation rate is low on average. This analysis suggests that institutions which promote commitment can prevent high inflation episodes from recurring.
We deduce properties of optimal monetary policies based on modern theory and standard empirical findings. In light of this analysis, we examine FOMC policy procedures and conclude that they put too much emphasis on short-term economic stabilization and too little emphasis on longer-term price stability. We propose a form of inflation targeting to address this problem.
In this paper we solve the non-convex, or signomial, geometric programming problem. The strategies found in the literature to solve this problem are basically branch-and-bound or condensation methods that locally transform the problem into a convex problem. The strategy presented here differs substantially from the existing ones, since we formulate the problem as a difference-of-convex-functions (DC) optimization problem in its standard form. Necessary and sufficient conditions for the existence of global solutions have also been developed. The challenge in the standard form is due to a constraint g(t) >= 1 with g convex. This difficulty is circumvented by using the classical inequality between the weighted arithmetic and harmonic means, which allows us to write the DC optimality conditions as a convex geometric programming problem and to solve it with a primal-dual predictor-corrector interior-point method, using the predictor phase to update the weights. The interior-point method solves the dual geometric programming problem, and an exponential transformation recovers the primal solution. We implemented the algorithm in Fortran 90 and applied it to a set of test problems from the literature for validation. The proposed method solved all evaluated problems, and the computational results are presented along with the solutions.
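The arithmetic-harmonic mean step can be illustrated numerically. The sketch below is not the paper's algorithm; it only checks the weighted AM-HM inequality and the resulting lower bound on a posynomial, with made-up term values and weights. For positive x_i and weights w_i summing to 1, sum_i w_i x_i >= 1 / sum_i (w_i / x_i); writing g(t) = sum_i u_i(t) as sum_i w_i (u_i(t)/w_i) yields g(t) >= 1 / sum_i (w_i^2 / u_i(t)), so the reverse constraint g(t) >= 1 can be conservatively replaced by sum_i w_i^2 / u_i(t) <= 1, whose left side is a posynomial and hence fits a convex geometric program.

```python
# Hedged numerical check of the weighted arithmetic-harmonic mean
# inequality and the condensation bound it implies. The values of the
# terms u_i (at some fixed t) and the weights w_i are illustrative.

def weighted_am(xs, ws):
    """Weighted arithmetic mean: sum_i w_i * x_i."""
    return sum(w * x for w, x in zip(ws, xs))

def weighted_hm(xs, ws):
    """Weighted harmonic mean: 1 / sum_i (w_i / x_i)."""
    return 1.0 / sum(w / x for w, x in zip(ws, xs))

us = [2.0, 5.0, 0.5]   # positive posynomial terms u_i(t) at a fixed point t
ws = [0.2, 0.3, 0.5]   # positive weights summing to 1

assert abs(sum(ws) - 1.0) < 1e-12
am, hm = weighted_am(us, ws), weighted_hm(us, ws)
assert am >= hm        # the AM-HM inequality

# Condensation: g(t) = sum_i u_i >= 1 / sum_i (w_i**2 / u_i),
# so "surrogate <= 1" implies the original constraint "g >= 1".
g = sum(us)
surrogate = sum(w * w / u for w, u in zip(ws, us))
assert g >= 1.0 / surrogate
print(f"AM = {am:.4f}, HM = {hm:.4f}, g = {g}, surrogate = {surrogate:.4f}")
```

The surrogate constraint is tighter than the original one (equality holds only when all u_i/w_i coincide), which is why condensation methods iterate, re-choosing the weights at each trial point.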