I realize this is quite old ... but I found something interesting yesterday. This thread contains an answer by Chris Forbes which suggests that it is indeed possible to have items of variable height, if the treeview has the TVS_NONEVENHEIGHT style. I played around with this idea and the GitHub code snippet he linked to, and found that it actually works, though it does not provide 100% flexibility (see the list of limitations below). I'm also not sure whether it would be suitable for changing the height of an item on mouse hover.
It still needs a lot of work, as the OwnerDraw code does not yet draw icons, tree lines, checkboxes, and the like, but I thought it was a surprising enough find to be worth posting here.
That is not possible with the System.Windows.Forms.TreeView, nor with any third-party replacement I'm aware of. The capabilities of the standard treeview are roughly those shown in the Windows Explorer folder view.
If you are using an icon, the easiest way is to set the icon size... this at least worked in the Compact Framework: I set the image height to the height I wanted for the item. If you don't want a visible icon, you could use an icon that matches your background.
This is inherently wasteful of computation because it ignores semantic equivalence of results in favour of only binary equivalence, and we want to reduce the wasted computation. For example, adding a comment to a source file changes its bytes, and thus its hash, forcing a rebuild even though the compiled output would be identical. However, for production use cases, computation is cheaper than the possibility of incremental bugs.
This can be equivalently phrased in terms of lacking "build identity": is there any way that the system knows what the "previous version" of the "same build" is? A postmodern build system doesn't have build identity, because it causes problems for multitenancy among other things: who decides what the previous build is?
Distributed builds: We live in a world where software can be compiled much faster by using multiple machines. Fortunately, turning the build into pure computations almost inherently allows distributing it.
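Monadic: a build whose plan can depend on results produced during the build itself. The name comes from the central operation on monadic types, bind, whose standard Haskell signature is:

```haskell
(>>=) :: m a -> (a -> m b) -> m b
```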
This means that, given a not-yet-executed action returning a and a function taking the resolved result of that action, you get a new action whose shape can depend arbitrarily on the result of m a. This is a dynamic build plan, since full knowledge of the build plan requires executing m a.
Applicative: a build for which the plan is statically known. Generally this implies a strictly two-phase build where the targets are evaluated, a build plan is made, and then the build is executed. This is so named because of the central operation on applicative types:
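```haskell
(<*>) :: f (a -> b) -> f a -> f b
```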
This means that, given a predefined pure function inside a build, the function can be executed to perform the build. But the shape of the build plan is known ahead of time, since the function cannot execute other builds.
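To make the contrast concrete, here is a minimal sketch using IO as a stand-in for a build monad; compile, link, and readDeps are hypothetical build actions, stubbed out for illustration:

```haskell
-- Hypothetical build actions, stubbed for illustration.
compile :: FilePath -> IO FilePath
compile src = pure (src ++ ".o")

link :: FilePath -> FilePath -> FilePath
link _ _ = "a.out"

readDeps :: FilePath -> IO [FilePath]
readDeps _ = pure ["util.c", "io.c"]

-- Applicative: both compiles are visible before anything runs, so the
-- whole plan (compile a.c, compile b.c, link the results) is static.
staticPlan :: IO FilePath
staticPlan = link <$> compile "a.c" <*> compile "b.c"

-- Monadic: which compiles happen depends on the result of readDeps,
-- so the plan only unfolds as the build executes.
dynamicPlan :: IO [FilePath]
dynamicPlan = readDeps "main.c" >>= traverse compile
```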
As much of a Nix shill as I am, Nix is not the postmodern build system. It has some design flaws that are very hard to rectify. Let's write about the things it does well, that are useful to adopt as concepts elsewhere.
Nix is a build system based on the idea of a "derivation". A derivation is simply a specification of an execution of execve. Its output is then stored in the Nix store (/nix/store/*) based on a name determined by hashing inputs or outputs. Memoization is achieved by skipping builds for which the output path already exists.
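As a toy model of this (hypothetical Haskell types with a stand-in checksum, not Nix's actual implementation):

```haskell
import Numeric (showHex)
import Data.Char (ord)
import System.Directory (doesDirectoryExist)

-- A derivation is a frozen execve call plus its declared inputs.
data Derivation = Derivation
  { drvName    :: String             -- e.g. "hello-2.12"
  , drvBuilder :: FilePath           -- the program to execve
  , drvArgs    :: [String]           -- its argv
  , drvEnv     :: [(String, String)] -- the complete environment
  , drvInputs  :: [FilePath]         -- store paths of other outputs
  } deriving Show

-- Input-addressed naming: the store path is a pure function of the
-- derivation, so identical specifications share one store entry.
storePath :: Derivation -> FilePath
storePath d = "/nix/store/" ++ showHex h "" ++ "-" ++ drvName d
  where
    -- Toy checksum standing in for Nix's cryptographic hashing.
    h = foldl (\acc c -> acc * 31 + fromIntegral (ord c)) (7 :: Integer) (show d)

-- Memoization: skip the build when the output path already exists.
build :: Derivation -> IO FilePath
build d = do
  let out = storePath d
  done <- doesDirectoryExist out
  if done
    then pure out                    -- cache hit: nothing to do
    else runBuilder d out >> pure out

-- Stub: real Nix performs the sandboxed execve here.
runBuilder :: Derivation -> FilePath -> IO ()
runBuilder _ _ = pure ()
```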
This mechanism lacks build identity, and is multitenant: you can dump a whole bunch of different Nix projects of various versions on the same build machine and they will not interfere with each other because of the lack of build identity; the only thing to go off of is the hash.
Central to the implementation of Nix is the idea of making execve pure. This is a brilliant idea that allows it to be used with existing software, and probably would be necessary at some level in a postmodern build system.
The way that Nix purifies execve is through the idea of "unknown input xor unknown output". A derivation is either input-addressed, with a fully specified environment that the execve is run in (read: no network), or output-addressed, where the output has a known hash (with network allowed).
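In sketch form, the two modes might be modeled like this (hypothetical names; the point is that exactly one side is left unknown):

```haskell
-- Either the inputs are fully pinned, or the output hash is pinned.
data Addressing
  = InputAddressed          -- environment fully specified; no network
  | OutputAddressed String  -- expected output hash; network allowed,
                            -- and the result is rejected on mismatch
```

Either way, the execve is pure from the outside: the result is either determined by the inputs or checked against a known hash.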
With ca-derivations (RFC), Nix additionally deduplicates input-addressed derivations with identical outputs, such that changes in how something is built that don't change its output avoid rebuilds. This solves a large problem in practice, since NixOS has a huge build cluster to cope with having to rebuild the entire known universe when one byte of glibc changes, even if the change will not influence downstream things. There are current projects to deduplicate the public binary cache, which is hundreds of terabytes in size, by putting it on top of a content-addressed store, independently of Nix itself doing content-addressing.
This idea of memoized pure execve is brilliant because it purifies impure build systems by ensuring that they are run in a consistent environment, while also providing some manner of incrementalism. Nix is a source-based build system, but in practice most things can be fetched from the binary cache, making it merely a question of time whether a binary or source build is used.
Nix is a monadic build system, but in a lot of contexts, such as in nixpkgs, due to NixOS Hydra using restrict-eval (which I don't think is documented anywhere, but IFD is banned), it is restricted to only applicative actions in practice, for the most part.
Import from derivation: Nix evaluation can demand a build before continuing further evaluation. Unfortunately, the Nix evaluator cannot continue with other, unrelated evaluation while waiting for a build, so in practice IFD tends to seriously blow up evaluation times through repeated blocking and loss of parallelism.
Dynamic derivations: The build plans of derivations can be derivations themselves, which are then actually built to continue the build. This crucially allows the Nix evaluator to not block evaluation for monadic dependencies even though the final build plan isn't fully resolved.
Nix can run monadic build systems inside of a derivation, but since it is not monadic in current usage, builds of very large software wind up being executed in one giant derivation/build target, meaning that nothing is reused from previous builds.
Puck's Zilch intends to fix this by replacing the surface language of Nix with a Scheme with native evaluation continuation, and integrating with/replacing the inner build systems such as Ninja, Make, Go, and so on, such that inner targets are turned into derivations, thus immediately obtaining incremental builds and cross-machine parallelism using the Nix daemon.
Even if we fixed the Nix surface to use monadic builds to make inner build systems pure and efficient, at some level our problem build systems become the compilers. In many compilers other than C/C++/Java, the build units are much larger (for example, the Rust compilation unit is an entire crate!), so the compilers themselves become build systems with incremental build support. Since we are a trustworthy-incremental build system, we cannot reuse previous build products and cannot use the compilers' incrementalism, as they pretty much all expect build identity.
For example, in Haskell, there are two ways of running a build: running ghc --make to build an entire project as one process, or running ghc -c (like gcc -c in C) to generate .o files. The latter eats O(modules) startup overhead in total, which is problematic.
The (Glorious Glasgow) Haskell compiler can reuse previous build products and provide incremental compilation that considers semantic equivalence, but that can't be used for production builds in a trustworthy-incremental world, since you would have to declare explicit dependencies on the previous build products and accept the possibility of incremental compilation bugs. Thus, ghc --make is at odds with trustworthy-incremental build systems: since you cannot provide a previous build product, you need to rebuild the entire project every time.
In fact, ghc --make is exactly what nixpkgs Haskell uses! This would be much more of a problem if it were actually feasible to do monadic builds in Nix. In order to cut compile times, Gabriella Gonzalez, Harry Garrood, I, and others worked on figuring out how to pass in the previous incremental build products.
A stunningly prescient talk from 2014 predicting that all of computing will be eaten by JavaScript: operating systems become simple JavaScript runtimes, but not because anyone writes JavaScript, and JavaScript starts eating lower and lower level pieces. It predicts a future where JavaScript is the lingua franca compiler target that everything targets, eliminating "native code" as a going concern.
Of course, this didn't exactly happen, but it also didn't exactly not happen. WebAssembly is the JavaScript compiler target that is gathering nearly universal support among compiled languages, and it allows for much better sandboxing of what would have been native code in the past.
There is a blog that inspired a lot of this thinking about build systems: Houyhnhnm computing, or, as I will shorten it, the horseblog. For those unfamiliar, the horseblog series is an excellent read and I highly recommend it as an alternative vision of computing, including:
- Persistence by default, with the data store represented by algebraic data types with functions to migrate between versions, automatic backup/availability everywhere, and automatic versioning of all kinds of data (see the sketch after this list).
- Hermetic build systems with determinism, reproducibility, and input-addressing. They handle every size of program build, from building an OS to building a program. They deal gracefully with the golden rule of software distributions: having only one version at a time implies a globally coherent distribution.
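The persistence point lends itself to a small sketch (hypothetical types): versioned data as ordinary algebraic data types, with an explicit total function migrating old values forward.

```haskell
-- Two versions of a persisted record, both ordinary ADTs.
data UserV1 = UserV1 { nameV1 :: String }
data UserV2 = UserV2 { nameV2 :: String, emailV2 :: Maybe String }

-- A total migration: every old value has a well-defined new form,
-- so stored data can always be upgraded mechanically.
migrateV1toV2 :: UserV1 -> UserV2
migrateV1toV2 (UserV1 n) = UserV2 n Nothing
```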
Unison is a computing platform that is remarkably horseblog-brained, a programming platform containing many of these same ideas. I don't know if there is any intentional connection, since, given enough thought, these are fairly unsurprising conclusions.
My take on Unison is that it is an extremely cool experiment, but it sadly requires a complete replacement of computing infrastructure at a level that doesn't seem likely. It won't solve our build system problems with other languages today.