If the idea of libraries as frontline responders in the opioid crisis sounds far-fetched, look no further than the Denver Public Library. In February 2017, a twenty-five-year-old man suffered a fatal overdose in one of its bathrooms. That prompted the library to lay in a supply of Narcan, a drug used to counteract opioid overdoses. Other libraries, including the San Francisco Public Library, have followed suit and begun to stock the life-saving drug.
To keep up with changing technology and user expectations, public libraries have invested in more computer terminals and Wi-Fi capability. They have upgraded and expanded facilities to provide more outlets, meeting rooms, study spaces, and seating that patrons can use for extended periods of time as they take advantage of free Wi-Fi.
In 2018, NEH launched a new program for Infrastructure and Capacity-Building Challenge Grants to support brick-and-mortar library projects as well as other efforts to strengthen the institutional base for the humanities in America. For example, the Hartford Public Library in Michigan received a 2019 NEH grant of $400,000 to construct a new library and community center, making available cultural and educational resources for the southwest area of the state.
With an NEH grant of $315,000, the University of California, San Francisco, Library, collaborating with San Francisco Public Library and the Gay, Lesbian, Bisexual, Transgender Historical Society, will digitize 150,000 pages from 49 archival collections related to the early days of the AIDS epidemic in the Bay Area and make them accessible online.
Since 1970, the American Library Association has received 66 NEH grants, totaling $32,006,701, for projects ranging from bookshelf programs such as Muslim Journeys, to traveling exhibits on topics such as the Dust Bowl and the African-American baseball experience, to reading and discussion series such as the Federal Writers' Project and the Columbian Quincentenary. In 2018, ALA received an NEH grant of $397,255 to conduct the Great Stories Club, a nationwide program for at-risk teens on themes of empathy, heroism, and marginalization.
Libraries are the central resource for supporting faculty and students in their research and information needs, both physically and remotely. This essential role of libraries and library faculty has remained consistent amid significant technological and pedagogical changes within the community college system. For this paper, the terms library faculty and librarian are used interchangeably to reinforce the faculty status of community college librarians. As librarians continue to determine their other roles within the California Community Colleges System and local districts in response to evolving demands, the inclusion and engagement of library faculty in college decision-making processes, program development, and other academic and professional matters are critical.
Just as each student body and community is diverse, with its own characteristics, needs, and goals, so is each library throughout the California community colleges. This paper encourages library faculty, administrators, and staff to apply the recommendations outlined throughout its text to meet their individual campus needs and requirements in providing impactful and equitable library instruction and services.
The All of Us Researcher Workbench houses one of the largest, richest, and most diverse biomedical datasets of its kind. It currently includes data from electronic health records, wearables, and surveys, as well as physical measurements and genomic data. Campus libraries are well-situated to respond to the need for data-driven research at academic institutions. This webinar aimed to teach viewers about the data available, how to register for the Workbench, and how their academic library and campus community can use the All of Us dataset. The webinar offered 1 Continuing Education Credit from the Medical Library Association and is approved for the Data Services Specialization.
Roles let you automatically load related vars, files, tasks, handlers, and other Ansible artifacts based on a known file structure. After you group your content into roles, you can easily reuse them and share them with other users.
An Ansible role has a defined directory structure with seven main standard directories. You must include at least one of these directories in each role. You can omit any directories the role does not use. For example:
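A sketch of that standard layout (the common role name and file contents are illustrative; only the directory names are fixed by convention):

    roles/
        common/               # this hierarchy represents a "role"
            tasks/main.yml    # the main list of tasks the role runs
            handlers/main.yml # handlers, which may be used within or outside this role
            templates/        # files for use with the template module
            files/            # files for use with the copy or script modules
            vars/main.yml     # other variables associated with this role
            defaults/main.yml # default, lower-priority variables for this role
            meta/main.yml     # metadata for the role, including role dependencies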
meta/main.yml - metadata for the role, including role dependencies and optional Galaxy metadata such as platforms supported. This is required for uploading to Galaxy as a standalone role, but not for using the role in your play.
In standalone roles you can also include custom modules and/or plugins, for example library/my_module.py, which may be used within the role (see Embedding modules and plugins in roles for more information).
The defaults and vars directories may also contain nested directories. If your variables file is a directory, Ansible reads all variables files and directories inside it in alphabetical order. If a nested directory contains variables files as well as directories, Ansible reads the directories first. Below is an example of a vars/main directory:
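A minimal sketch, assuming the nested file names below are illustrative:

    roles/
        common/
            vars/
                main/                     # a directory instead of a single main.yml
                    first_nested_file.yml
                    second_nested_file.yml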
If you store your roles in a different location, set the roles_path configuration option so Ansible can find your roles. Checking shared roles into a single location makes them easier to use in multiple playbooks. See Configuring Ansible for details about managing settings in ansible.cfg.
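For example, a shared location could be configured in ansible.cfg like this (the paths themselves are illustrative):

    # ansible.cfg
    [defaults]
    roles_path = ./roles:~/ansible/shared-roles:/etc/ansible/roles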
If using tags with tasks in a role, be sure to also tag your pre_tasks, post_tasks, and role dependencies and pass those along as well, especially if the pre/post tasks and role dependencies are used for monitoring outage window control or load balancing. See Tags for details on adding and using tags.
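A sketch of that pattern, using a hypothetical deploy tag, an illustrative app role, and placeholder debug tasks standing in for real load-balancer steps:

    ---
    - hosts: webservers
      pre_tasks:
        - name: Take the host out of the load balancer pool (placeholder)
          ansible.builtin.debug:
            msg: "remove {{ inventory_hostname }} from the pool"
          tags:
            - deploy
      roles:
        - role: app
          tags:
            - deploy
      post_tasks:
        - name: Put the host back into the pool (placeholder)
          ansible.builtin.debug:
            msg: "add {{ inventory_hostname }} back to the pool"
          tags:
            - deploy

Running the play with --tags deploy then picks up the pre/post tasks along with the role's tasks, rather than the role alone.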
You can reuse roles dynamically anywhere in the tasks section of a play using include_role. While roles added in a roles section run before any other tasks in a play, included roles run in the order they are defined. If there are other tasks before an include_role task, the other tasks will run first.
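A minimal sketch (the example role name and messages are illustrative):

    ---
    - hosts: webservers
      tasks:
        - name: Print a message
          ansible.builtin.debug:
            msg: "this task runs before the example role"

        - name: Include the example role
          ansible.builtin.include_role:
            name: example

        - name: Print a message
          ansible.builtin.debug:
            msg: "this task runs after the example role"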
When you add a tag to an include_role task, Ansible applies the tag only to the include itself. This means you can pass --tags to run only selected tasks from the role, if those tasks themselves have the same tag as the include statement. See Selectively running tagged tasks in reusable files for details.
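For example (the bar role and foo tag are illustrative), running this play with --tags foo executes the include, but only those tasks inside bar that also carry the foo tag:

    ---
    - hosts: webservers
      tasks:
        - name: Include the bar role and apply the tag to the include itself
          ansible.builtin.include_role:
            name: bar
          tags:
            - foo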
Beginning with version 2.11, you may choose to enable role argument validation based on an argument specification. This specification is defined in the meta/argument_specs.yml file (or with the .yaml file extension). When this argument specification is defined, a new task is inserted at the beginning of role execution that will validate the parameters supplied for the role against the specification. If the parameters fail validation, the role will fail execution.
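A small sketch of such a specification, assuming a hypothetical myapp role with a single integer option:

    # roles/myapp/meta/argument_specs.yml
    ---
    argument_specs:
      main:
        short_description: Main entry point for the myapp role.
        options:
          myapp_int:
            type: "int"
            required: false
            default: 42
            description: "An integer value used by the role."

With this file in place, calling the role with a non-integer myapp_int causes the inserted validation task to fail before any of the role's own tasks run.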
Ansible also supports role specifications defined in the role meta/main.yml file. However, any role that defines the specs within this file will not work on versions below 2.11. For this reason, we recommend using the meta/argument_specs.yml file to maintain backward compatibility.
When role argument validation is used on a role that has defined dependencies, then validation on those dependencies will run before the dependent role, even if argument validation fails for the dependent role.
Ensure that the default value in the docs matches the default value in the code. The actual default for the role variable will always come from the role defaults (as defined in Role directory structure).
Ansible only executes each role once in a play, even if you define it multiple times, unless the parameters defined on the role are different for each definition. For example, Ansible only runs the role foo once in a play like this:
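For instance (foo is an illustrative role name):

    ---
    - hosts: webservers
      roles:
        - foo
        - foo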
If you pass different parameters in each role definition, Ansible runs the role more than once. Providing different variable values is not the same as passing different role parameters. You must use the roles keyword for this behavior, since import_role and include_role do not accept role parameters.
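For example, assuming foo accepts a message role parameter, both definitions below run:

    ---
    - hosts: webservers
      roles:
        - { role: foo, message: "first" }
        - { role: foo, message: "second" }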
Role dependencies are prerequisites, not true dependencies. The roles do not have a parent/child relationship. Ansible loads all listed roles, runs the roles listed under dependencies first, then runs the role that lists them. The play object is the parent of all roles, including roles called by a dependencies list.
Ansible always executes roles listed in dependencies before the role that lists them. Ansible executes this pattern recursively when you use the roles keyword. For example, if you list role foo under roles:, role foo lists role bar under dependencies in its meta/main.yml file, and role bar lists role baz under dependencies in its meta/main.yml, Ansible executes baz, then bar, then foo.
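Sketching the meta files for that chain (foo, bar, and baz are illustrative role names):

    # roles/foo/meta/main.yml
    ---
    dependencies:
      - role: bar

    # roles/bar/meta/main.yml
    ---
    dependencies:
      - role: baz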
Ansible treats duplicate role dependencies like duplicate roles listed under roles: it only executes a role dependency once, even if it is defined multiple times, unless the parameters, tags, or when clause defined on the role are different for each definition. If two roles in a play both list a third role as a dependency, Ansible only runs that role dependency once, unless you pass different parameters, tags, or a when clause, or use allow_duplicates: true in the role you want to run multiple times. See Galaxy role dependencies for more details.
Role deduplication does not consult the invocation signature of parent roles. Additionally, when using vars: instead of role params, there is a side effect of changing variable scoping. Using vars: results in those variables being scoped at the play level. In the below example, using vars: would cause n to be defined as 4 throughout the entire play, including roles called before it.
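A sketch of both points, using illustrative car, wheel, tire, and brake roles; here n is passed as a role parameter, so each wheel invocation keeps its own value, whereas passing it under vars: would leave n set to 4 for the entire play:

    # playbook.yml
    ---
    - hosts: localhost
      roles:
        - car

    # roles/car/meta/main.yml
    ---
    dependencies:
      - role: wheel
        n: 1
      - role: wheel
        n: 2
      - role: wheel
        n: 3
      - role: wheel
        n: 4

    # roles/wheel/meta/main.yml
    ---
    dependencies:
      - role: tire
      - role: brake

    # roles/tire/meta/main.yml and roles/brake/meta/main.yml
    ---
    allow_duplicates: true

In this sketch, wheel runs four times because each invocation passes a different n, while tire and brake repeat alongside it only because they set allow_duplicates: true.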
In addition to the above, users should be aware that role de-duplication occurs before variable evaluation. This means that lazy evaluation may make seemingly different role invocations effectively identical, preventing the role from running more than once.