get_versions should return the list of names of all versions of the origin defined at self.url by the default constructor; and get_default_version should return the name of the default version (usually the latest stable release).
Next, get_package_info takes a version name as argument (as returned by get_versions) and yields (branch_name, p_info) tuples, where branch_name is a string and p_info is an instance of the NewPackageInfo class we defined earlier.
Each of these tuples should match a single file the loader will download from the origin. Usually there is only one file per version, but this is not true for all package repositories (e.g. CRAN and PyPI allow multiple artifacts per version).
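Putting these three methods together, a minimal sketch might look like this. The JSON layout, URLs, and branch naming below are assumptions for illustration only; a real loader subclasses swh.loader.package.loader.PackageLoader and uses the attrs-based NewPackageInfo defined earlier:

```python
from dataclasses import dataclass
from typing import Iterator, List, Tuple

# Stand-in for the NewPackageInfo class defined earlier (in a real
# loader this extends swh.loader.package.loader.BasePackageInfo).
@dataclass
class NewPackageInfo:
    url: str
    filename: str
    version: str

# Hypothetical API response for the origin: one downloadable file
# per version, plus a pointer to the latest stable release.
API_RESPONSE = {
    "versions": {
        "1.0.0": {"tarball": "https://example.org/pkg-1.0.0.tar.gz"},
        "1.1.0": {"tarball": "https://example.org/pkg-1.1.0.tar.gz"},
    },
    "latest": "1.1.0",
}

class NewLoader:
    def get_versions(self) -> List[str]:
        # Names of all versions of the origin.
        return list(API_RESPONSE["versions"])

    def get_default_version(self) -> str:
        # Usually the latest stable release.
        return API_RESPONSE["latest"]

    def get_package_info(self, version: str) -> Iterator[Tuple[str, NewPackageInfo]]:
        # Yield one (branch_name, p_info) pair per file to download.
        url = API_RESPONSE["versions"][version]["tarball"]
        yield f"releases/{version}", NewPackageInfo(
            url=url, filename=url.rsplit("/", 1)[-1], version=version
        )
```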
The base PackageLoader will then take care of calling get_versions() to get all the versions, then call get_package_info() to get the list of archives to download, download them, and load all the directories in the archives.
The final step for your minimal loader to work is to implement build_release. This is a very important part, as it creates the release object that will be inserted into Software Heritage, as a link between origins and directories.
author and committer (resp. date and committer_date) may differ if the release was written and published by different people (resp. at different dates). This is only relevant when loading from a VCS, so you can usually ignore it in your package loader.
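A rough sketch of what build_release does, with stand-in classes for the swh.model.model objects (the field set shown here is an assumption; consult the real Release model for the full attribute list):

```python
from dataclasses import dataclass
from typing import Optional

# Stand-ins for swh.model.model.Person / Release (assumption: the
# real build_release returns a swh.model.model.Release instance).
@dataclass
class Person:
    fullname: bytes

@dataclass
class Release:
    name: bytes
    message: bytes
    author: Person
    target: bytes           # id of the directory extracted from the tarball
    synthetic: bool = True  # releases built by a loader are synthetic

def build_release(p_info: dict, uncompressed_path: str,
                  directory: bytes) -> Optional[Release]:
    # Link the package version to the directory loaded from its archive.
    return Release(
        name=p_info["version"].encode(),
        message=f"Synthetic release for version {p_info['version']}\n".encode(),
        author=Person(fullname=p_info["author"].encode()),
        target=directory,
    )
```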
As we do not want tests to directly query an origin (it makes tests flaky, hard to reproduce, and puts unnecessary load on the origin), we usually mock it using the swh.core.pytest_plugin.requests_mock_datadir() fixture.
The files in datadir/ will then be served whenever the loader tries to access a URL. This is very dependent on the kind of repositories your loader reads from, so here is an example with the PyPI loader.
In the previous sections, you wrote a fully functional loader for a new type of package repository. This is great! Please tell us about it, and submit it for review so we can give you some feedback early.
If release objects are generated from extrinsic fields (i.e. not extracted from the archive, such as authorship information added by the package repository), two different package versions with the same tarball would end up with the same release, causing the loader to create incorrect snapshots.
This is used for example by the PyPI loader (with a sha256 checksum) and the NPM loader (with a sha1 checksum). The Debian loader uses a similar scheme: as a single package is assembled from a set of tarballs, it only uses the hash of the .dsc file, which itself contains a hash of all the tarballs.
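A sketch of such a checksum-based identifier; the ExtID type name below is hypothetical, and in a real loader this logic lives in NewPackageInfo.extid(), whose result the base PackageLoader compares against artifacts already in the archive:

```python
import hashlib
from typing import Tuple

EXTID_TYPE = "package-sha256"  # hypothetical ExtID type name

def extid(checksum_hex: str) -> Tuple[str, bytes]:
    # The repository API gives us a sha256 of the tarball; using it
    # as a unique identifier lets the loader skip tarballs it has
    # already loaded, without downloading them again.
    return (EXTID_TYPE, bytes.fromhex(checksum_hex))

# The checksum normally comes from the repository's API, not from
# hashing the tarball ourselves (that would require downloading it).
digest = hashlib.sha256(b"tarball bytes").hexdigest()
print(extid(digest))
```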
Unfortunately, this does not work for all packages, as some package repositories do not provide a checksum of the archives via their API. If this is the case for the repository you want to load from, you need to find a way around it.
Alternatively, if this is not good enough for your loader, you can simply not implement ExtIDs, and your loader will always load all tarballs. This can be bandwidth-heavy for both Software Heritage and the origin you are loading from, so this decision should not be taken lightly.
Why unique to the loader? Because different loaders may load the same archive differently. For example, if I were to create an archive with both a PKG-INFO and a package.json file, and submit it to both NPM and PyPI, both package repositories would have exactly the same tarball. But the NPM loader would create the release based on authorship info in package.json, and the PyPI loader based on PKG-INFO. And we do not want the PyPI loader to assume it already created a release itself, when the release was actually created by the NPM loader!
Finally, an optional step: collecting and loading extrinsic metadata. This is metadata that your loader may collect while loading an origin. For example, the PyPI loader collects some parts of the API response.
This is done by adding them to the directory_extrinsic_metadata attribute of your NewPackageInfo object when creating it in get_package_info, as swh.loader.package.loader.RawExtrinsicMetadataCore objects:
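A self-contained sketch of that attachment, with lightweight stand-ins for the real swh classes (the format name and API payload are hypothetical):

```python
import json
from dataclasses import dataclass, field
from typing import List

# Stand-in for swh.loader.package.loader.RawExtrinsicMetadataCore
# (assumption: a format name plus the raw metadata bytes).
@dataclass
class RawExtrinsicMetadataCore:
    format: str
    metadata: bytes

@dataclass
class NewPackageInfo:
    url: str
    version: str
    directory_extrinsic_metadata: List[RawExtrinsicMetadataCore] = field(
        default_factory=list
    )

# Copy the relevant part of the API response verbatim, as bytes.
api_info = {"name": "pkg", "version": "1.1.0"}
p_info = NewPackageInfo(
    url="https://example.org/pkg-1.1.0.tar.gz",
    version="1.1.0",
    directory_extrinsic_metadata=[
        RawExtrinsicMetadataCore(
            format="new-repository-json",  # hypothetical format name
            metadata=json.dumps(api_info).encode(),
        )
    ],
)
```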
format should be a human-readable ASCII string that unambiguously describes the format. Readers of the metadata object will have a built-in list of formats they understand, and will check whether your metadata object is among them. You should use one of the known metadata formats if possible, or add yours to this list.
metadata is the metadata object itself. When possible, it should be copied verbatim from the source object you got, and should not be created by the loader. If this is not possible, for example because it is extracted from a larger JSON or XML document, make as few modifications as possible to reduce the risk of corruption.
In theory, you can write extrinsic metadata on any kind of object, e.g. by implementing swh.loader.package.loader.PackageLoader.get_extrinsic_origin_metadata() or swh.loader.package.loader.PackageLoader.get_extrinsic_snapshot_metadata(); but this is rarely relevant in practice. Be sure to check whether your loader can find any potentially interesting metadata, though!
You also need to implement a new method on your loader class to return information on where the metadata comes from, called a metadata authority. This authority is identified by a URI, such as the base URL of GitHub or PyPI. For example:
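A sketch of such a method, with a stand-in for the swh.model metadata authority class (the authority type and URL below are hypothetical placeholders for your repository's):

```python
from dataclasses import dataclass

# Stand-in for swh.model.model.MetadataAuthority (assumption: the
# real method is implemented on your PackageLoader subclass).
@dataclass
class MetadataAuthority:
    type: str
    url: str

def get_metadata_authority() -> MetadataAuthority:
    # The URI identifying where the extrinsic metadata comes from
    # (hypothetical forge URL).
    return MetadataAuthority(type="forge", url="https://example.org/")
```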
Congratulations, you made it to the end. If you have not already, please contact us to tell us about your new loader, and submit your loader for review on our forge so we can merge it and run it along our other loaders to archive more repositories.
As it turns out, this isn't so simple, because you have to make and install a Python package with your templates in it, which introduces a lot of needless complexity, especially if you have no intention of distributing your code.
This document describes the API to Jinja and not the template language (for that, see the Template Designer Documentation). It will be most useful as a reference for those implementing the template interface to the application, and not those who are creating Jinja templates.
Jinja uses a central object called the template Environment. Instances of this class are used to store the configuration and global objects, and to load templates from the file system or other locations. Even if you are creating templates from strings by using the constructor of the Template class, an environment is created automatically for you, albeit a shared one.
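The environment described in the next paragraph is typically created as shown below. To keep the sketch self-contained and runnable, it substitutes a FileSystemLoader over a temporary directory for the documented PackageLoader("yourapp") form, which needs an importable package:

```python
import os
import tempfile
from jinja2 import Environment, FileSystemLoader, select_autoescape

# Documented form (needs an importable "yourapp" package):
#   env = Environment(loader=PackageLoader("yourapp"),
#                     autoescape=select_autoescape())
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "hello.html"), "w") as f:
    f.write("<p>Hello {{ name }}!</p>")

env = Environment(
    loader=FileSystemLoader(tmpdir),
    autoescape=select_autoescape(),  # escapes .html/.xml templates by default
)
print(env.get_template("hello.html").render(name="<World>"))
```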
This will create a template environment with a loader that looks up templates in the templates folder inside the yourapp Python package (or next to the yourapp.py Python module). It also enables autoescaping for HTML files. This loader only requires that yourapp is importable; it figures out the absolute path to the folder for you.
The high-level API is the API you will use in the application to load and render Jinja templates. The Low Level API, on the other hand, is only useful if you want to dig deeper into Jinja or develop extensions.
The core component of Jinja is the Environment. It contains important shared variables like configuration, filters, tests, globals and others. Instances of this class may be modified if they are not shared and if no template was loaded so far. Modifying an environment after the first template was loaded will lead to surprising effects and undefined behavior.
If set to True, the XML/HTML autoescaping feature is enabled by default. For more details about autoescaping, see Markup. As of Jinja 2.4, this can also be a callable that is passed the template name and has to return True or False depending on whether autoescaping should be enabled by default.
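A sketch of the callable form, using a hypothetical name-based policy (escape markup-like templates, leave plain text alone):

```python
from jinja2 import Environment, DictLoader

def guess_autoescape(template_name):
    # Hypothetical policy: escape only templates with markup-like names.
    # Templates created from strings have no name (None).
    if template_name is None:
        return False
    return template_name.endswith((".html", ".xml"))

env = Environment(
    loader=DictLoader({
        "page.html": "{{ payload }}",
        "note.txt": "{{ payload }}",
    }),
    autoescape=guess_autoescape,
)
print(env.get_template("page.html").render(payload="<b>hi</b>"))  # escaped
print(env.get_template("note.txt").render(payload="<b>hi</b>"))   # left as-is
```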
The size of the cache. By default this is 400, which means that if more than 400 templates are loaded, the loader will clean out the least recently used template. If the cache size is set to 0, templates are recompiled all the time; if the cache size is -1, the cache will not be cleaned.
If a template was created by using the Template constructor, an environment is created automatically. These environments are created as shared environments, which means that multiple templates may have the same anonymous environment. For all shared environments this attribute is True, else False.
Create a new overlay environment that shares all the data with the current environment except for the cache and the overridden attributes. Extensions cannot be removed from an overlayed environment. An overlayed environment automatically gets all the extensions of the environment it is linked to, plus optional extra extensions.
Creating overlays should happen after the initial environment was set up completely. Not all attributes are truly linked; some are just copied over, so modifications on the original environment may not shine through.
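A small sketch of the overlay behavior: the overlay inherits the base environment's settings and overrides only what you pass in.

```python
from jinja2 import Environment

# Set up the base environment completely first, then derive the overlay.
base = Environment(autoescape=True)
overlay = base.overlay(trim_blocks=True)

print(overlay.autoescape)                     # True, inherited from base
print(base.trim_blocks, overlay.trim_blocks)  # False True: only the
                                              # overlay is overridden
```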
Creates a new Undefined object for name. This is useful for filters or functions that may return undefined objects for some operations. All parameters except hint should be provided as keyword parameters for better readability. The hint is used as the error message for the exception if provided; otherwise the error message will be generated from obj and name automatically. The exception provided as exc is raised if something is done with the generated undefined object that the undefined object does not allow. The default exception is UndefinedError. If a hint is provided, the name may be omitted.
If the name or obj is known (for example because an attribute was accessed), it should be passed to the undefined object, even if a custom hint is provided. This gives undefined objects the possibility to enhance the error message.
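A sketch of constructing such an object by hand; the hint text is our own, and note that the default Undefined prints as an empty string, so we trigger the error with a disallowed operation (here, int conversion):

```python
from jinja2 import Environment
from jinja2.exceptions import UndefinedError

env = Environment()
# Build an undefined value by hand; the hint becomes the error message.
undef = env.undefined(name="user", hint="no 'user' variable was provided")
try:
    int(undef)  # a disallowed operation raises the configured exception
except UndefinedError as exc:
    print(exc)
```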
Finds all the templates the loader can find, compiles them, and stores them in target. If zip is None, the templates will be stored in a directory instead of a zipfile. By default the deflate zip algorithm is used; to switch to the stored algorithm, zip can be set to 'stored'.
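A sketch of both target forms, using a DictLoader with a single throwaway template so the example is self-contained:

```python
import os
import tempfile
import zipfile
from jinja2 import Environment, DictLoader

env = Environment(loader=DictLoader({"hello.txt": "Hello {{ name }}!"}))

# Default: a zipfile using the deflate algorithm.
target = os.path.join(tempfile.mkdtemp(), "compiled.zip")
env.compile_templates(target)
print(zipfile.is_zipfile(target))   # True

# zip=None: compiled templates are written into a plain directory.
dir_target = tempfile.mkdtemp()
env.compile_templates(dir_target, zip=None)
print(len(os.listdir(dir_target)) >= 1)
```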