Monthly mean lakewide water levels (1918 to present), including long-term averages, maximums, and minimums. Also includes daily Great Lakes water levels and real-time gage data for connecting-channel water levels.

Information on Great Lakes regulation, including Lake Superior outflow and compensating-works gate settings, as well as connecting-channel discharge measurements and conveyance-change monitoring.

Data on recent net basin-supply conditions, including precipitation, evaporation, runoff, snow water equivalent, ice cover, and surface water temperature, as well as long-term basin trends for lakes Superior, Michigan-Huron, Erie, and Ontario.

All Great Lakes Water Management data contained herein is preliminary in nature and therefore subject to change. The data is for general information purposes ONLY and SHALL NOT be used in technical applications such as, but not limited to, studies or designs. All critical data should be obtained from and verified by the United States Army Corps of Engineers Detroit District, Engineering and Construction Division, Hydraulics and Hydrology Branch, 477 Michigan Avenue, Detroit, MI 48226. The United States of America assumes no liability for the completeness or accuracy of the data contained herein and any use of such data inconsistent with this disclaimer shall be solely at the risk of the user.

The official website of the U.S. Army Corps of Engineers Great Lakes and Ohio River Division. Contact lrd-we...@usace.army.mil for website corrections. DISCLAIMER: This is an official U.S. Army Corps of Engineers (USACE) website; however, hyperlinked locations do not constitute a USACE endorsement. USACE does not exercise editorial control over the information at linked locations.

The purpose of a strategic assessment is to determine if an organization is achieving its strategic objectives. This is often a difficult process to implement, given normal staff aversion to introspective processes and a lack of doctrine specific to assessments. The purpose of this article is to discuss best practices and common pitfalls in military assessments while outlining steps needed to continue to improve assessments across the Department of Defense (DOD). First, we outline the doctrine and literature guiding the DOD. Second, we provide a review of common assessment methods used across the military. Next, we present the four best practices proven successful in the joint staff, strategic commands, and recent conflicts. Last, we provide recommendations on how to improve the state of assessments in the DOD.

Joint Publication (JP) 5-0, Joint Operation Planning, and JP 3-0, Joint Operations, provide doctrine to the joint force on the staff processes and methods from receipt of mission through developing and implementing a vision and strategy.1 For implementation of assessments, JP 5-0 and Joint Doctrine Note 1-15, Operation Assessment, provide general frameworks for implementing an assessment process within a joint staff.2 While joint doctrine reserves comment on methods and techniques, multiservice doctrine compensates for this shortfall, outlining existing methods, assisted by a number of journal articles describing successful methods used in Iraq and Afghanistan.3 The process for gap assessment run by the joint staff to collect data from the combatant commands (CCMDs), outlined in various policies and instructions, is conducted through the Annual Joint Assessment (AJA, formerly known as the Comprehensive Joint Assessment, CJA) and tasked in the Guidance for Employment of the Force.4 The joint staff recently added additional policy providing common joint terminology for risk in its publication of the Chairman of the Joint Chiefs of Staff Manual (CJCSM) 3105.01, Joint Risk Analysis, which allows clear communication of the results of an assessment from one echelon to another.5

The consistent themes across doctrine include descriptions of common staffing processes such as boards, bureaus, and working groups; discussion of data calls and data collection during the assessment process; and emphasis on commander involvement, while continuing to adhere to legacy terms from effects-based assessment. Literature, mostly from federally funded research and development centers, provides current methods in assessments, while doctrine only partially assists the joint force in informing assessment methods, as we outline later in this article. While doctrine provides an overview of how to implement a process and a few of the main techniques, neither doctrine nor other supporting military publications provide clear guidance on best practices. This lack of guidance contributes to a joint environment with no authoritative delineation between good and bad practices, or between effective and ineffective display techniques for condensing and conveying assessment data.

To understand best practices, leaders should recognize inadequate assessment methods in use across the DOD and their corresponding narratives in data displays. Three characteristics prevail among these techniques: lack of standards, subjective data displays, and inadequate source material. These methods and techniques, using monikers defined by their display, include thermographs, standardless stoplights, color averages, simple arrows, indices, one-hundred-point scales, and effects-based assessment. With little literature and no joint doctrine to provide assessment teams the foundation to cite the faults of these methods, it is difficult for commands to leave these techniques behind.6 This article provides knowledge to inform leadership and empower assessment teams to build their credibility with other staff sections by building their expertise in assessment methods. The paragraphs below describe these inadequate methods and explain why each is a poor assessment technique.

The standardless stoplight, consisting of a red-amber-green scale, is the most common form of assessment and is essentially a simplified thermograph (see figure 2). A common practice is to use these colors to create a subjective display, or an evaluation of progress without parameters, absolving the briefing agency of accountability for evaluating progress against a verifiable standard in its assessment. Every stoplight chart should include, at a minimum, a legend giving the short version of what the colors mean and a written narrative, held in reserve, that fully details the standards-based bins.

Indices comprise a weighted average of normalized data. The purpose of an index is to have a single indicator summarizing an aspect of a problem (see figure 5). Indices are useful when experts agree on the weights applied to the input data, and the data is used to compare like items, such as state fragility indices. (They combine scores measuring two essential qualities of state performance: effectiveness and legitimacy; these two quality indices combine scores on distinct measures of the key performance dimensions of security, governance, economics, and social development.) Most indices for assessments are not transparent enough to provide value, such as when multiple indicators contribute to the increase or decrease of an index, hiding the key indicators. Further, weighted averages assume a consistent linear relationship and quality data collection, rarely found in the complex problems the military attempts to measure. Making transparency even more difficult, assessors often leverage proxies for many indicators when substantial data does not exist, thereby degrading the legitimacy of insights analysis may provide.
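The mechanics of an index, and the transparency problem the paragraph above describes, can be sketched in a few lines. All indicator names, raw values, and weights below are illustrative assumptions, not drawn from any real fragility index; the point is that the single composite score hides which indicator drove a change.

```python
# Hypothetical composite index: a weighted average of normalized indicators.
# Indicator names, values, and weights are invented for illustration.

def normalize(value, lo, hi):
    """Min-max normalize a raw value onto the [0, 1] interval."""
    return (value - lo) / (hi - lo)

def composite_index(indicators, weights):
    """Weighted average of normalized indicator scores (weights sum to 1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * score for name, score in indicators.items())

# Notional scores, already normalized to [0, 1], for four dimensions.
indicators = {"security": 0.70, "governance": 0.40,
              "economics": 0.55, "social": 0.60}
weights = {"security": 0.4, "governance": 0.3,
           "economics": 0.2, "social": 0.1}

score = composite_index(indicators, weights)
# 0.4*0.70 + 0.3*0.40 + 0.2*0.55 + 0.1*0.60 = 0.57
```

A reader of the single value 0.57 cannot tell whether security improved or governance collapsed, which is exactly the opacity the article criticizes; publishing the indicator table and weights alongside the score is the minimum needed for transparency.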

One-hundred-point scales source data through a survey, with multiple subordinate commands and/or directorates voting on the status of an objective using a scale of 1 to 100 with the overall score being the average of the votes. While there are general rules on the scoring for these surveys, our ability to measure the difference between natural states is not refined enough for the assessor to discern the difference between, for instance, 67 and 68, rendering measurement to this fidelity, and the corresponding assessment conclusions, meaningless.
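The false precision of a 1-to-100 survey average can be shown with simple arithmetic. The vote values below are invented; the sketch computes the average of notional directorate scores and the spread around it, showing that a one-point difference is far inside the noise.

```python
# Illustrative only: averaging notional 1-100 survey votes.
# The five vote values are invented for demonstration.
from statistics import mean, stdev

votes = [60, 72, 65, 70, 68]   # five directorates' scores for one objective
avg = mean(votes)              # 67.0
spread = stdev(votes)          # about 4.69

# The sample standard deviation dwarfs a one-point difference, so reporting
# "67 versus 68" implies precision the underlying judgments cannot support.
```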

So we might ask ourselves: why do we continue to use these methods? Quite simply, assessment team members are very often assigned without sufficient education, training, or prior experience in assessments. Even when assessment personnel have experience, there is little documentation they can cite in support of their methods when meeting organizational resistance within their own staff. The next section provides alternative, proven methods that are manageable to implement.

Strategic questions. In determining progress and gaps for a given line of effort (LOE) or intermediate military objective (IMO), several common questions arise. Recording these questions is a practice in many assessment programs because it gives those responsible for the assessment a way to record, in detail, the assumptions and the logical lines followed by working groups in determining why they believe they are progressing or retrogressing. By reviewing these questions periodically, the working groups revisit their assumptions and their progress in light of changes in the operational environment. While strategic questions are sometimes informed by indicators, indicators are not required if the question is qualitative in nature. Some example questions are shown in figure 6.12

Third, standards-based binning facilitates gap analysis. By listing the current state and the desired state, working groups can determine the future operations or activities required to move between bins, along with the capability, capacity, and authority gaps that must be bridged between the two states. Last, binning provides a method to hold subordinate commands and staff accountable for their evaluation; the evaluator must provide evidence that an IMO is in a bin. The process results in a method of clearly rating progress toward an objective. An example of a standards-based scale, or binning, is shown on the left side of figure 7.
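The binning approach described above can be sketched as a small data structure plus two functions: one that records an evaluation and refuses a rating without supporting evidence (the accountability point), and one that measures the gap between current and desired bins. The bin labels, standards, objective name, and evidence strings are all illustrative assumptions, not taken from any doctrinal scale.

```python
# Sketch of standards-based binning. Bin standards, the objective name,
# and the evidence items are hypothetical examples.

BINS = [
    ("Bin 1", "No partner capability exists"),
    ("Bin 2", "Capability exists but requires full enabling support"),
    ("Bin 3", "Capability operates with partial enabling support"),
    ("Bin 4", "Capability operates independently and is self-sustaining"),
]

def evaluate(objective, bin_index, evidence):
    """Record a binned evaluation; evidence is mandatory for accountability."""
    if not evidence:
        raise ValueError("A bin rating requires supporting evidence.")
    label, standard = BINS[bin_index]
    return {"objective": objective, "bin": label,
            "standard": standard, "evidence": evidence}

def gap(current_bin, desired_bin):
    """Gap analysis: bins separating the current state from the end state."""
    return desired_bin - current_bin

rating = evaluate("IMO 2.1 (notional)", 1,
                  evidence=["Exercise report, Q3", "Advisor assessment"])
# gap(1, 3) returns 2: two bins between current and desired state
```

The design choice worth noting is that the rating carries its standard and evidence with it, so a reviewing echelon can verify the bin rather than take a color or number on faith.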
