Completions are configured either by a ZSH framework's asdf plugin or will need to be configured per Homebrew's instructions. If you are using a ZSH framework, the associated asdf plugin may need to be updated to use the new ZSH completions properly via fpath. The Oh-My-ZSH asdf plugin has not yet been updated; see ohmyzsh/ohmyzsh#8837.
On macOS, starting a Bash or Zsh shell automatically calls a utility called path_helper. path_helper can rearrange items in PATH (and MANPATH), causing inconsistent behavior for tools that require specific ordering. To work around this, asdf on macOS defaults to forcibly adding its PATH entries to the front (taking highest priority). This behavior can be controlled with the ASDF_FORCE_PREPEND environment variable.
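For example, to opt out of the forced prepend, you could set the variable in your shell rc file before asdf is sourced (a sketch; the exact value accepted and the sourcing line depend on your asdf version and install method, so check the asdf configuration docs):

```shell
# In ~/.zshrc, before asdf is sourced: let path_helper's ordering stand
# instead of forcing asdf's PATH entries to the front.
export ASDF_FORCE_PREPEND=no
# . "$(brew --prefix asdf)/libexec/asdf.sh"   # asdf sourced afterwards
```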
asdf performs a version lookup of a tool in all .tool-versions files from the current working directory up to the $HOME directory. The lookup occurs just-in-time when you execute a tool that asdf manages.
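For illustration, here is a hypothetical project's .tool-versions (tool names and versions are examples, written to a scratch directory):

```shell
# Create an example .tool-versions; asdf would read the first such file
# found walking up from the current working directory toward $HOME.
proj=$(mktemp -d)
cat > "$proj/.tool-versions" <<'EOF'
nodejs 14.16.1
ruby 3.2.2
EOF
cat "$proj/.tool-versions"
```

Each line pairs a tool name with the version asdf's shims should resolve to in that directory tree.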
If no version is listed for a tool, executing that tool will error. asdf current shows the tool and version resolution (or its absence) from your current directory, so you can see which tools will fail to execute.
Some operating systems ship with tools that are managed by the system rather than by asdf; python is a common example. You need to tell asdf to pass management back to the system. The Versions reference section will guide you.
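The mechanism (a sketch; see the Versions reference for details) is the special version string `system` in .tool-versions, which tells asdf's shims to fall through to whatever binary the OS provides on PATH:

```shell
# Hypothetical .tool-versions entry, written to a scratch directory here:
dir=$(mktemp -d)
printf 'python system\n' > "$dir/.tool-versions"
cat "$dir/.tool-versions"
```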
Can the recursive search time through subdirectories be reduced by explicitly listing the directories of the projects I'm currently working on? What is the :source-registry format for doing this? (I can't find it in the ASDF Manual. Would prefer not to simply add the directories to asdf:*central-registry* in the old style.)
One way to solve this problem is to let Quicklisp do it for you. If you install your systems under QL's local-projects directory then it will do the search, once, and then cache the results. It is quite smart about this:
This works by keeping a cache of system file pathnames in /local-projects/system-index.txt. Whenever the timestamp on the local projects directory is newer than the timestamp on the system index file, the entire tree is re-scanned and cached.
Even better: if it decides it needs to redo the search it does it when you try to load the first system, not before. So image startup time is unaffected and you pay the cost when you expect to pay it. QL is ... quite well-written.
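As for the :source-registry question itself: per the ASDF manual, you can list project directories explicitly in a configuration file such as ~/.config/common-lisp/source-registry.conf (paths here are illustrative):

```lisp
(:source-registry
  ;; scan a single directory for .asd files
  (:directory "/home/me/projects/current-app/")
  ;; scan a directory and all of its subdirectories
  (:tree "/home/me/projects/libs/")
  ;; keep whatever the default configuration would add
  :inherit-configuration)
```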
Apparently asdf only provides a precompiled version of Python that depends on OpenSSL 1.1. You could also consider the cachix/nixpkgs-python project on GitHub, which, like asdf, offers most Python versions, but built using Nix and kept up to date on an hourly basis. From what I could tell, those packages use OpenSSL 3.
For the actual benchmarking I like hyperfine, and it exports results to CSV and JSON so it was perfect for this. All I had to do was write a Bash script that wires up git for-each-ref and git checkout and then runs hyperfine for each command on each version. The resulting shell script is attached. Gathering the data with this benchmarking script is straightforward.
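A sketch of what such a script might look like (the tag pattern, the benchmarked commands, and the output layout are my assumptions, not the attached script):

```shell
# Write the benchmarking script; run it from inside an asdf git checkout
# with hyperfine on PATH. Here we only syntax-check it.
cat > bench.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
mkdir -p results
# One hyperfine run per (command, version tag) pair, exported to CSV.
for tag in $(git for-each-ref --format='%(refname:short)' refs/tags); do
  git checkout --quiet "$tag"
  for cmd in 'asdf current' 'asdf reshim'; do
    hyperfine --warmup 3 \
      --export-csv "results/${cmd// /_}-${tag}.csv" \
      "$cmd"
  done
done
EOF
bash -n bench.sh   # syntax check only; no benchmarks are run here
```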
The benchmarking script produced a directory of CSV files containing the results. Each file represented a single benchmark of a single command at a single version. I wanted to generate graphs of the performance of each command across all versions of asdf, and for that I knew I needed to combine the CSV files. I wrote a script that combined all the benchmarking data for each command into a single CSV. This script was a bit sloppy but it worked. Now I have one CSV file for each command, with each row containing the benchmark results for a specific version of asdf.
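A minimal sketch of that merge step (the filename layout, one results/<command>-<tag>.csv per run with a hyperfine header row, is an assumption):

```shell
# Merge per-version hyperfine CSVs for one command into a single file:
# keep the header from the first file only, and tag every data row with
# the version parsed from its filename (e.g. asdf_current-v0.9.0.csv).
combine_csvs() {
  local out=$1; shift
  local first=1 f tag
  for f in "$@"; do
    tag=${f##*-}      # strip everything up to the last "-": "v0.9.0.csv"
    tag=${tag%.csv}   # drop the extension: "v0.9.0"
    if [ "$first" = 1 ]; then
      head -n 1 "$f" | sed 's/$/,version/' > "$out"
      first=0
    fi
    tail -n +2 "$f" | sed "s/\$/,${tag}/" >> "$out"
  done
}

# Usage: combine_csvs all.csv results/asdf_current-*.csv
```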
asdf reshim is by far the slowest of all the commands benchmarked. It has also gotten significantly slower over the last couple of asdf releases. It currently averages 24 seconds on my laptop! Granted, I have a lot of shims on my laptop, and the reshim process is O(n) in the number of shims. Despite this, I think there are things we can do to significantly improve these numbers in future releases.
In 2019 One Data Model (OneDM) was started to bring several IoT SDOs and IoT device and platform vendors together under a broad, multi-party liaison agreement, with a goal of arriving at a common set of data and interaction models that describe IoT devices. After some exploratory work this resulted in a successful proposal to create the ASDF WG.
As a common language for writing down these models, the Semantic Definition Format (SDF) went through the IETF process, producing draft-ietf-asdf-sdf. This SDF base specification has now reached WG consensus and is to be published. SDF represents these models in JSON, enabling reuse of specification formats such as CDDL (RFC 8610) and the formats proposed at json-schema.org, along with their tooling, for describing both the SDF format itself and the structure of the data to be modelled in SDF.
SDF does not directly address data serialization. Instead, SDF focuses solely on modeling the structure and semantics of the data being exchanged. Consequently, the task of data serialization (including RPC semantics) is delegated to other standards, which are typically established by existing IoT SDOs.
The ASDF WG has developed SDF into a standards-track specification for thing interaction and data modeling. In the process of developing this specification, further functional requirements have emerged that can be addressed as extensions to the base SDF specification.
As the work evolves, ASDF will observe, and may want to interact with, IRTF Research Groups such as the Usable Formal Methods Research Group (UFMRG). ASDF will work with the Thing-to-Thing Research Group (T2TRG) and its WISHI (Work on IoT Semantic/Hypermedia Interoperability) program to engage researchers and other SDOs in this space, such as the W3C Web of Things, which is working on Thing Models and related specifications.
- A hierarchical, human-readable metadata format (implemented using YAML)
- Numerical arrays are stored as binary data blocks which can be memory-mapped. Data blocks can optionally be compressed.
- The structure of the data can be automatically validated using schemas (implemented using JSON Schema)
- Native Python data types (numerical types, strings, dicts, lists) are serialized automatically
- ASDF can be extended to serialize custom data types

ASDF is under active development on GitHub. More information on contributing can be found below.
Out of the box, the asdf package automatically serializes and deserializes native Python types. It is possible to extend asdf by implementing custom tags that correspond to custom user types. More information on extending ASDF can be found in the official documentation.
We welcome feedback and contributions to the project. Contributions of code, documentation, or general feedback are all appreciated. Please follow the contributing guidelines to submit an issue or a pull request.
Last year, I wrote a post titled Install Java with asdf and, slightly to my surprise, it ended up becoming the most visited article on my personal blog. Given that, I decided to write another, more complete guide to asdf. Even though this guide is meant for macOS, most things covered here should apply to Linux systems too, potentially with some minor tweaks.
Say you work as a developer for a company whose tech stack is Ruby on Rails on the backend and React on the frontend. There are quite a number of repositories for different services, and unsurprisingly not all of them use the same versions of Ruby or Node.js.
Thanks to its plugin system, asdf is extensible enough for you to install and manage versions of almost any programming language you might want to use. And with asdf you only need to learn one set of simple commands to do it.
I had a fairly interesting situation at work recently. On this project, the backend server and the frontend client each live in a subdirectory of the same repository, and we are in the process of developing a new client app to replace the old one.
In order to run the old client together with the server, I made another copy of the whole project, set the local Node.js version to 10.22.0 in the new directory, and ran the old client. For the server, since the local Node.js version was already set to 14.16.1 in the original project directory, I could still start it as normal.
That certainly worked fine for me. But later I learned that there is a much simpler way: to use an asdf shell version. Without making an extra copy of the project, I could simply start a new shell session in the project directory and set a shell version for Node.js by:
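The command is `asdf shell nodejs 10.22.0` (asdf 0.x syntax). Under the hood it exports a session-scoped variable that asdf's shims check before consulting any .tool-versions file, so the sketch below shows that variable directly (and runs even without asdf installed):

```shell
# Pin Node.js for this shell session only; this is roughly what
# `asdf shell nodejs 10.22.0` does: no .tool-versions file is touched.
export ASDF_NODEJS_VERSION=10.22.0
```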
So basically asdf allows you to select different versions of programming languages on a per directory basis, and on top of that you have the option to set a shell version which only affects the current shell session.
You can run asdf current to get a list of the current versions of installed programming languages in the current directory. For example, say we are in the legacy project directory, where a local Node.js version is set but no Ruby version is. What you get should be something like this:
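The exact layout varies between asdf releases, but the output is roughly as follows (paths illustrative):

```
$ asdf current
nodejs          14.16.1         /home/me/legacy-app/.tool-versions
ruby            ______          No version is set. Run "asdf <global|shell|local> ruby <version>"
```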
Let's come back to the scenario mentioned at the beginning of this article: you work for a company which uses Ruby on Rails for the backend and React for the frontend, and different projects might have different language version requirements.
After introducing asdf you no longer have to deal with different tools for managing versions of different programming languages, which is great. But obviously, when starting to work on a project for the first time, everyone still needs to get the correct local versions installed.
Luckily, as it turns out, there is a much better way: running asdf install without any arguments. If a required version is not installed yet, asdf will go ahead and install it; if a version is already installed, asdf will tell you that and do nothing.