Draft Final Report Meaning

Maybell Hughs

Jul 27, 2024, 8:12:08 PM
to farfomeli

This article was co-authored by Alexander Ruiz, M.Ed. Alexander Ruiz is an Educational Consultant and the Educational Director of Link Educational Institute, a tutoring business based in Claremont, California, that provides customizable educational plans, subject and test prep tutoring, and college application consulting. With over a decade and a half of experience in the education industry, Alexander coaches students to increase their self-awareness and emotional intelligence while building skills and working toward higher education. He holds a BA in Psychology from Florida International University and an MA in Education from Georgia Southern University.

This article has been fact-checked, ensuring the accuracy of any cited facts and confirming the authority of its sources.

This article has been viewed 94,737 times.

The draft is a very important stage in developing a good report. It is the stage at which the ideas are worked out in detail, the writing is clarified, and diagrams and other figures are added, yet the work is not finalized. This is when others read the report and add their input, suggestions, and critique; they may find errors, make amendments, and redirect the content in certain ways. As such, the draft report needs to be good enough to be "almost" ready, but written with the expectation that various amendments will follow once it is clear what needs improving.

In the event of a national emergency that required a draft, the following sections provide information on the Sequence of Events, the different Classifications that have been used in the past, Postponements, Deferments, and Exemptions, and the peacetime Medical Draft.

Selective Service activates and orders all personnel to report for duty. Reserve Force Officers, along with selected military retirees, begin to open Area Offices to accept registrant claims. Local, District Appeal, and National Board members are notified to report for refresher training.

A publicly attended, nationally televised and live-streamed lottery is conducted. The lottery, a random drawing of birthdays and numbers, establishes the order in which individuals receive orders to report for induction. The first to receive induction orders are those whose 20th birthday falls during the year of the lottery. If required, additional lotteries are conducted for those aged 21, 22, 23, 24, 25, 19, and finally 18.5 years.
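The ordering described above can be sketched in code. This is only an illustration of the logic as stated in the text, not an official Selective Service algorithm: birthdays are drawn in random order to assign call numbers, and registrants are then called by age cohort (20 first, then 21 through 25, then 19, then 18.5), with call number breaking ties within a cohort.

```python
import random

# Illustrative sketch of the lottery ordering (assumed details: birthdays
# are modeled as day-of-year numbers 1-366, including a leap day).
random.seed(2024)

birthdays = list(range(1, 367))
random.shuffle(birthdays)                       # random drawing of birthdays
call_number = {day: i + 1 for i, day in enumerate(birthdays)}

# Age cohorts in the order they are called, per the text.
age_order = [20, 21, 22, 23, 24, 25, 19, 18.5]

def induction_rank(age, birthday_day):
    """Lower rank = called earlier: sort by age cohort, then call number."""
    return (age_order.index(age), call_number[birthday_day])

# A 20-year-old is always called before a 21-year-old with the same birthday:
first_day = min(call_number, key=call_number.get)
assert induction_rank(20, first_day) < induction_rank(21, first_day)
```

The tuple returned by `induction_rank` makes the cohort dominate the ordering, which matches the description that additional lotteries are held only "if required."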

Induction notices are sent and registrants may now make claims if desired for a postponement, deferment or exemption. Inductees report to a local Military Entrance Processing Station (MEPS) for induction. At MEPS, registrants are given a physical, mental, and moral evaluation to determine whether they are fit for military service. Once notified of the results of the evaluation, a registrant will either be inducted into military service or sent home.

According to current Department of Defense (DoD) requirements, Selective Service must deliver the first inductees to the military within 193 days of the onset of a crisis, once the law has been updated to authorize a draft.

A registrant can file a claim only after receipt of an order to report for induction and before the day he is scheduled to report. Only in the case of an extreme emergency, under circumstances beyond his control, would a registrant be allowed to file a claim on the day he is scheduled to report for induction.

It will not be necessary for the registrant to submit supporting evidence of his claim at the time he files the request form. He will be contacted and given instructions on what information is needed, where to send it, and when it should be sent.

The Health Care Personnel Delivery System (HCPDS) is a standby plan developed for the Selective Service System at the request of Congress. If needed, it would be used to draft health care personnel in a crisis. It is designed to be implemented in connection with a national mobilization in an emergency, and then only if Congress passes, and the President signs, legislation to enact it. No portion of the plan is designed for implementation in peacetime.

Evaluation reports should represent a thoughtful, well-researched, and well-organized effort to objectively evaluate the subject of the evaluation (e.g., strategy, project, activity) and should be readily understood, identifying key points clearly, distinctly, and succinctly.
Evaluation findings should be presented as analyzed facts, evidence, and data, not as anecdotes, hearsay, or a simple compilation of people's opinions. Conclusions should be clearly based on the evaluation findings.

Evaluation methodology should be explained in detail and sources of information properly identified. Limitations to the evaluation should be adequately disclosed in the report, with particular attention to the limitations associated with the evaluation methodology (selection bias, recall bias, unobservable differences between comparator groups, etc.).

To ensure a high-quality evaluation report, the draft report must undergo a peer review organized by the office that is managing the evaluation. The OU should review the evaluation report against ADS 201maa, Criteria to Ensure the Quality of the Evaluation Report. OUs may also involve peers from relevant Regional and/or Technical Bureaus in the review process as appropriate (see ADS 201sai, Managing the Peer Review of a Draft Evaluation Report).

We can also get a prediction of how much compute will be _available_ by predicting the cost of compute in a given year (which we have a decent amount of past evidence about), and predicting the maximum amount of money an actor would be willing to spend on a single training run. The probability that we can train a transformative model in year Y is then just the probability that the compute _requirement_ for year Y is less than the compute _available_ in year Y.
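The comparison described above can be sketched as a small Monte Carlo estimate. All of the distributional parameters below are made-up placeholders for illustration, not numbers from the report; the point is only the structure of the calculation: P(trainable in year Y) = P(compute required ≤ compute available in Y).

```python
import random

random.seed(0)

def p_trainable(req_mu, req_sigma, avail_mu, avail_sigma, n=100_000):
    """Probability that the (log10 FLOP) compute requirement falls below
    the (log10 FLOP) compute available, modeling both as normals."""
    hits = 0
    for _ in range(n):
        required = random.gauss(req_mu, req_sigma)    # log10 compute required
        available = random.gauss(avail_mu, avail_sigma)  # log10 compute available
        if required <= available:
            hits += 1
    return hits / n

# Hypothetical example: requirement centered at 1e30 FLOP (sigma 3 OOM),
# availability centered at 1e28 FLOP (sigma 1 OOM).
print(p_trainable(30, 3, 28, 1))
```

Working in log10 space is natural here because both the requirement and the availability are uncertain over many orders of magnitude.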

Given that AI companies each have around $100B cash on hand, and could potentially borrow additional several hundreds of billions of dollars (given their current market caps and likely growth in the worlds where AI still looks promising), it seems likely that low hundreds of billions of dollars could be spent on a single run by 2040, corresponding to a doubling time (from $1B in 2025) of about 2 years.
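The quoted doubling time is a quick back-of-the-envelope calculation, which can be checked directly. The endpoints below are the text's own figures ($1B in 2025, roughly $200B by 2040); nothing else is assumed.

```python
import math

# Growth from $1B (2025) to ~$200B (2040) in maximum spend on one run.
start, end = 1e9, 200e9
years = 2040 - 2025

doublings = math.log2(end / start)   # ~7.6 doublings of spending
doubling_time = years / doublings    # ~2 years per doubling
print(round(doublings, 1), round(doubling_time, 1))
```

So "low hundreds of billions by 2040" is consistent with the stated ~2-year doubling time.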

We can talk about the rate at which synapses fire in the human brain. How can we convert this to FLOP? The author proposes the following hypothetical: suppose we redo evolutionary history, but in every animal we replace each neuron with N [floating-point units](https://en.wikipedia.org/wiki/Floating-point_unit) that each perform 1 FLOP per second. For what value of N do we still get roughly human-level intelligence over a similar evolutionary timescale? The author then does some calculations about simulating synapses with FLOPs, drawing heavily on the report _How Much Computational Power It Takes to Match the Human Brain_, to estimate that N would be around 1-10,000, which after some more calculations suggests that the human brain is doing the equivalent of 1e13 - 1e16 FLOP per second, with **a median of 1e15 FLOP per second**, and a long tail to the right.

We also need to estimate how many epochs will be needed, i.e. how many times we train on any given data point. The author decides not to explicitly model this factor since it will likely be close to 1, and instead lumps in the uncertainty over the number of epochs with the uncertainty over the constant factor in the scaling law above. We can then look at language model runs to estimate a scaling law for them, for which the median scaling law predicts that we would need 1e13 data points for our 3e14 parameter model.
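The scaling-law step above can be sanity-checked numerically. The exponent `alpha = 0.8` is an assumption (a commonly cited rough value for data-vs-parameter scaling, not the report's exact fitted law), and the constant is calibrated so that the quoted point (3e14 parameters needing ~1e13 data points) is reproduced.

```python
# D = k * P^alpha : data points needed as a function of parameter count.
alpha = 0.8                            # assumed exponent, not from the report
params = 3e14
k = 1e13 / params ** alpha             # calibrate constant to the quoted point

# Extrapolation: how much data would a 10x larger model need?
data_for_10x = k * (10 * params) ** alpha
print(f"{data_for_10x:.1e}")           # ~6.3e13: 10x params -> ~6.3x data
```

The sublinear exponent is what makes data, rather than parameters, a less severe bottleneck as models grow, under this assumed law.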

So we now have six different paths: the three neural net anchors (short, medium and long horizon), the genome anchor, the lifetime anchor, and the evolution anchor. We can now assign weights to each of these paths, where each weight can be interpreted as the probability that that path is the _cheapest_ way to get a transformative model, as well as a final weight that describes the chance that none of the paths work out.
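The mixture over paths can be written out explicitly. The weights for the lifetime, genome, and evolution anchors and for "none of the paths" follow the figures given in the text below; the split of the remaining 65% among the three neural-net anchors, and all of the per-path probabilities-by-year, are illustrative placeholders, not values from the report.

```python
# Weight on each path = P(that path is the cheapest route to TAI).
weights = {
    "short_horizon": 0.25, "medium_horizon": 0.30, "long_horizon": 0.10,
    "lifetime": 0.05, "genome": 0.10, "evolution": 0.10,
}
p_none = 0.10  # chance that none of the paths work out
assert abs(sum(weights.values()) + p_none - 1.0) < 1e-9

# Hypothetical P(path delivers a transformative model by 2050):
p_by_2050 = {
    "short_horizon": 0.8, "medium_horizon": 0.6, "long_horizon": 0.3,
    "lifetime": 0.9, "genome": 0.4, "evolution": 0.1,
}

# Overall probability is the weight-averaged per-path probability.
p_tai_2050 = sum(weights[k] * p_by_2050[k] for k in weights)
print(round(p_tai_2050, 3))
```

Because the weights are interpreted as "probability this path is cheapest," they must sum to one together with the "none work out" residual.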

The lifetime anchor would suggest that we either already could get TAI, or are very close, which seems very unlikely given the lack of major economic applications of neural nets so far, and so gets assigned only 5%. The genome path gets 10%, the evolution anchor gets 10%, and the remaining 10% is assigned to none of the paths working out.

When thinking about any particular path in more concrete terms, a host of problems shows up that will need to be solved and may add extra time. Examples include robustness, reliability, a possible breakdown of the scaling laws, the need to generate many different kinds of data, etc.

For years in the near future, there is less time to do the research needed to find cheaper routes, so the probabilities for those years should be treated as overestimates; by the same reasoning, the probabilities for later years should be treated as underestimates.

I place 30% on short horizons, 40% on medium horizons, and 10% on long horizons. The report already names several reasons why we might expect the long horizon assumption to be too conservative. I agree with all of those, and have one more of my own:

While the report does talk about challenges in e.g. getting the right data and environments by the right time, I think there are a bunch of other challenges as well: for example, you need to ensure that your model is aligned, robust, and reliable (at least if you want to deploy it and get economic value from it). I do expect that these challenges will be easier than they are today, partly because more research will have been done and partly because the models themselves will be more capable.

Thanks for doing this, this is really good!

Some quick thoughts, will follow up later with more once I finish reading and digesting:

--I feel like it's unfair to downweight the less-compute-needed scenarios based on recent evidence, without also downweighting some of the higher-compute scenarios as well. Sure, I concede that the recent boom in deep learning is not quite as massive as one might expect if one more order of magnitude would get us to TAI. But I also think that it's a lot bigger than one might expect if fifteen more are needed! Moreover I feel that the update should be fairly small in both cases, because both updates are based on armchair speculation about what the market and capabilities landscape should look like in the years leading up to TAI. Maybe the market isn't efficient; maybe we really are in an AI overhang.

--If we are in the business of adjusting our weights for the various distributions based on recent empirical evidence (as opposed to more a priori considerations) then I feel like there are other pieces of evidence that argue for shorter timelines. For example, the GPT scaling trends seem to go somewhere really exciting if you extrapolate them four more orders of magnitude or so.

--Relatedly, GPT-3 is the most impressive model I know of so far, and it has only 1/1000th as many parameters as the human brain has synapses. I think it's not crazy to think that maybe we'll start getting some transformative shit once we have models with as many parameters as the human brain, trained for the equivalent of 30 years. Yes, this goes against the scaling laws, and yes, arguably the human brain makes use of priors and instincts baked in by evolution, etc. But still, I feel like at least a couple percentage points of probability should be added to "it'll only take a few more orders of magnitude" just in case we are wrong about the laws or their applicability. It seems overconfident not to. Maybe I just don't know enough about the scaling laws and stuff to have as much confidence in them as you do.
