Nx features can be enabled in each of these types of repositories. Just as each repository is unique and may not fit neatly into one of these categories, the way Nx is used will vary between repositories.
A package-based repo is a collection of packages that depend on each other via package.json files and nested node_modules. With this setup, you typically have different dependencies for each project. Build tools like Jest and Webpack work as usual, since everything is resolved as if each package was in a separate repo and all of its dependencies were published to npm. Moving an existing package into a package-based repo is very easy since you generally leave that package's existing build tooling untouched. Creating a new package inside the repo is just as difficult as spinning up a new repo since you have to create all the build tooling from scratch.
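As a minimal sketch (names and versions illustrative), a package-based repo is often wired together with npm, yarn, or pnpm workspaces; the root package.json might look like this, while each package under packages/ keeps its own package.json, dependencies, and build scripts:

```json
{
  "name": "my-package-based-repo",
  "private": true,
  "workspaces": ["packages/*"],
  "devDependencies": {
    "nx": "^17.0.0"
  }
}
```

Packages then depend on each other through entries like `"shared-utils": "*"` in their own package.json files, which the workspace resolves through node_modules exactly as if the dependency had been published to npm.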
An integrated repo contains projects that depend on each other through standard import statements. There is typically a single version of every dependency defined at the root. Sometimes build tools like Jest and Webpack need to be wrapped to work correctly. It's harder to add an existing package to this repo style because the build tooling for that package may need to be modified. It's straightforward to add a brand-new project to the repo because all the tooling decisions have already been made.
Nx plugins, especially the generators, executors and migrations that come with them, are not only valuable for a monorepo scenario. In fact, many developers use Nx not primarily for its monorepo support, but for its tooling support, particularly its ability to modularize a codebase and, thus, better scale it. Nx supports standalone applications, which are like an integrated monorepo setup, but with just a single, root-level application. Think of it as an advanced, more capable Create-React-App or Angular CLI. And obviously, you can still leverage all the generators and executors and structure your application into libraries or submodules.
Nx itself doesn't care which style you choose. You can use all the features of Nx whether you are in a package-based repo or an integrated repo. Certain Nx features will be more or less valuable for a standalone app, but all of them are still available to be put in place as soon as that repo grows to include more apps.
The comparison between package-based repos and integrated repos is similar to that between JSDoc and TypeScript. The former is easier to adopt and provides some good benefits. The latter takes more work but offers more value, especially at a larger scale.
An integrated repo offers a lot of features at the cost of some flexibility, but sometimes you want to take back control of your build system or dependency management for a single project in the repo. This recipe shows you how to do that.
Because this is a package-based project, we won't use Nx to generate it. You can create the project in whatever way is typical for the framework you're adding, whether that means using the framework's own CLI or adding the files manually yourself.
Any task you define in the scripts section of the project's package.json can be executed by Nx. These scripts can be cached and orchestrated in the same way a target defined in project.json is. If you want to define some tasks in project.json and some tasks in package.json, Nx will read both and merge the configurations.
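For example (project name, scripts, and output path illustrative), given a project package.json like the following, Nx can run the scripts as targets — e.g. `npx nx build my-lib` — and the optional `nx` key can add target configuration such as cacheable outputs:

```json
{
  "name": "my-lib",
  "scripts": {
    "build": "tsc -p tsconfig.json",
    "test": "jest"
  },
  "nx": {
    "targets": {
      "build": {
        "outputs": ["{projectRoot}/dist"]
      }
    }
  }
}
```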
Because this is a package-based project, you'll be managing your own updates for it. In addition, you'll need to be careful when running the nx migrate command on the rest of the repo. There may be times when a migration changes code in this package-based project when it shouldn't. You'll need to manually revert those changes before committing.
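A hypothetical illustration of that revert step (the repo layout and package name are invented): `git checkout -- <path>` restores just the self-managed project's directory to its committed state, discarding whatever a migration wrote there:

```shell
set -e
# Set up a throwaway repo with one self-managed package
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email "dev@example.com"
git config user.name "dev"
mkdir -p packages/legacy-pkg
printf '{"name":"legacy-pkg"}\n' > packages/legacy-pkg/package.json
git add . && git commit -qm "initial"

# ...imagine `npx nx migrate --run-migrations` rewrote this file...
printf '{"name":"migrated"}\n' > packages/legacy-pkg/package.json

# Revert only that project's directory before committing the migration
git checkout -- packages/legacy-pkg
cat packages/legacy-pkg/package.json   # back to the committed version
```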
An integrated Nx repo does not lock you into only using Nx plugins for all of your projects. You can always opt-out of using certain features and take on the responsibility of managing that functionality yourself.
I wonder what the general preference is and where it comes from. Having used FreeBSD actively for a couple of years, I'm leaning towards Gentoo, but I've had the unpleasant experience of wasting valuable time because Gentoo was installed on a really old machine with painfully long build times.
Had it been something prebuilt and package-based, less time would have been wasted. (I know FreeBSD has binary packages, but the problem with those is that they seem unmaintained past a version release, and the only way to get fresh stuff is to compile it yourself. I don't know about Gentoo, really.)
Not because we care for childish funroll-loops speed tuning. What we care about is the flexibility of installing precisely what we want, not every package feature and dependency that a contributor thinks we might one day want. We are perfectly comfortable with how Linux, compiling and libraries work. We don't want everything to be abstracted away from us into a black box.
Whilst I empathise with the situation you describe, it is not something that is symptomatic of source-based distributions or indeed prevented by using a package-based distribution. It is the result of:
Truth be told, I haven't extensively used any other Linux distros for quite some time now. For this reason, if you were to place a Debian machine (for example) onto my lap and tell me that OpenLDAP wasn't working after a reboot, then I too might spend 15 minutes or an hour to resolve the issue. Not because I don't have a good understanding of Linux or that Debian isn't a good OS, but primarily because I don't recall the intimate details of Debian's RC scripts or package system. This is why internal documentation is essential within an organisation. It should serve to jump-start the unfamiliar and to fill in the gaps of everyone else. Even if they do know everything.
Just a quick note specifically about Gentoo. Portage's USE flags are incredibly useful, and "minimal" is one that I use frequently. I don't, for instance, want the server binaries of a package to be installed on a machine that will only ever be a client during its lifetime. Having them unnecessarily present may increase complexity or even be a security concern. It's never an issue of space. You can see what USE flags and dependencies a package is going to adopt (and which it isn't) before it starts compiling by using the -av arguments to emerge. So you shouldn't get any surprises.
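As a concrete illustration (the package and flag choices here are hypothetical), per-package USE flags typically live in /etc/portage/package.use, and `emerge -av` previews the flag and dependency set before anything compiles:

```
# /etc/portage/package.use -- build only the OpenLDAP client bits
net-nds/openldap minimal

# Preview before building; the USE column shows what will be enabled:
#   emerge -av net-nds/openldap
```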
What I will say is that I use Gentoo and Ubuntu. I used to use Gentoo for all my GNU/Linux machines but decided that Ubuntu on my desktops was easier to manage. I am on the side of admins that think using Gentoo for a server is a great idea -- and use it for almost all of my personal servers.
Apart from all that, what I like even more than building stuff from source or tweaking CFLAGS is Portage. Portage is simply, hands down, the best package and system management tool I have ever used for GNU/Linux. I can create an exact environment with just the things I need and not one thing more, and I can do it without going outside of my package management system. I don't have to have postfix installed simply because some packaging dude decided that it's 'necessary'. I don't need mono installed because I want to run Gnome; I simply don't install Tomboy. Quickly browsing the installed packages on this Ubuntu desktop shows that it has bind installed? Why? It serves no purpose; I'm not running a DNS server on this computer, and I certainly don't need the documentation for it, but it's here.
The downside... time. It takes not only the time to build packages but the time to keep the system maintained. I can be relatively sure that I can run a sudo aptitude update and upgrade and not have a care in the world... On the other hand, you have to be very careful about the small details when you update a Gentoo machine and make sure that you need (and are prepared for) what it wants to do. The 'straw that broke the camel's back', so to speak, of why I switched to Ubuntu for my desktops was an upgrade of udev that borked a setup I had -- rather than try and fix it, I wanted to get the work I was trying to do done. So I figured I really don't care if bind documentation is taking up space on my desktop, because I have the space.
On my servers, however, it's a totally different matter. I want to control everything about the environment and the packages that are installed; every little detail I want to be in control of -- and Gentoo lets me do that with ease.
You can always use a combination of both methods: use a precompiled distribution as a base, gaining the advantages of a well-integrated working environment and fast, painless updates, while using self-compiled software where you need to be at the leading edge of that software's development. Do it either with some self-discipline (using configure with the prefix option) or by packaging the source yourself. You can always "reuse" the build instructions (an rpm spec file or a debian subdirectory, for example).
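The configure-with-prefix approach mentioned above can look like this sketch (the source tree and version are hypothetical); installing under its own prefix keeps hand-built software out of the package manager's territory:

```shell
# Hypothetical tarball; keep self-compiled software in its own prefix
tar xf myapp-1.2.tar.gz && cd myapp-1.2
./configure --prefix=/opt/myapp-1.2
make
make install   # nothing lands in /usr, so dpkg/rpm never fight over it
```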
I wouldn't use the build mechanism on production servers directly. From my Gentoo days I know that it's possible to have a build host and create the packages needed on that host, so that you can burn CPU cycles on a dedicated machine with the compile flags suitable for your needs.
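On Gentoo, that build-host setup is typically done with FEATURES="buildpkg" on the builder and PORTAGE_BINHOST on the clients (hostnames and CFLAGS here are illustrative), so production machines install prebuilt binaries with emerge --usepkg instead of compiling:

```
# Build host: /etc/portage/make.conf
FEATURES="buildpkg"
CFLAGS="-O2 -pipe"   # tuned once, reused by every server

# Production servers: /etc/portage/make.conf
PORTAGE_BINHOST="http://buildhost.example.org/packages"
# then install binaries instead of compiling:
#   emerge --getbinpkg --usepkg <package>
```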
Overall there isn't too much difference. Nearly all of our servers run Debian, but we ended up having a build host anyway. We needed a few software packages that weren't available as debs, so we set up a reprepro repository and a build host for our needs. The build host is pretty much unmaintained (in terms of "does it work") because redeployment is just a matter of a few minutes: we can netboot it and it will automatically redeploy itself.
Whether it's BSD ports, Gentoo, or Debian doesn't matter that much; the real profit comes from using a system (as in, a collection of services on different hosts that provide business value) that minimizes wasted resources and is maintainable. We chose Debian because security updates are a no-brainer and pre-compiled, so we don't have to "waste" time rebuilding packages whenever a security update comes out. That rebuild time is about the only drawback I can think of for source-based vs. binary-based distros.